Stabilization over discrete memoryless and wideband channels using nearly memoryless observations

Anant Sahai

Abstract: We study stabilization of a discrete-time scalar unstable plant over a noisy communication link, where the plant is perturbed at each time by an unknown, but bounded, additive disturbance. While allowing unbounded computational complexity at the decoder/controller, we restrict the observer/encoder to be nearly memoryless in that the channel inputs can only depend on the last sample taken of the plant state. A time-varying randomized encoding scheme is given for discrete memoryless channels, while a time-invariant deterministic scheme is used for a peak-power constrained wideband channel. These schemes are shown to achieve stabilization in an η-moment sense.

[A. Sahai is with the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley. sahai@eecs.berkeley.edu]

Fig. 1. Control over a noisy communication channel without any explicit feedback path from controller to observer. (Block diagram: the scalar system with state X(t) is driven by the disturbance W(t−1) and the one-step-delayed control U(t−1); the encoder/observer O maps the state X(t) into the noisy channel; the decoder/controller C maps the channel outputs into the control signals U(t).)

I. INTRODUCTION

The problem of control in the presence of communication constraints is illustrated in figure 1. This is a problem of cooperative distributed control in that there are two different boxes that need to be designed: the observer/encoder that has access to the plant state and generates an input to the noisy channel, and the controller which has access to the channel outputs and applies a control to the plant.

The reader is referred to two resources to get the appropriate background. The first is the classic 1978 paper by Ho, Kastner, and Wong [5] in which they summarize the then-known relationships among team theory, signaling, and information theory from the perspective of automatic control. The more recent interest in the subject is best understood by reading the September 2004 special issue of the IEEE Transactions on Automatic Control and the references contained therein. [Footnote 1: Due to space limitations, I only cite references specifically helpful for understanding the present work, not for giving full and proper historical background.]

Here, we focus exclusively on the case shown in figure 1 where the information patterns [10] are non-nested in that the observer does not know everything the controller knows. This means that there is no explicit feedback of the channel outputs to the observer. Our previous work [7] showed that in the case of finite alphabet channels, the plant itself could be used as a feedback channel for communication. The controller can make the plant dance so as to noiselessly communicate the channel output to the encoder/observer. This showed that in principle, the non-nested information pattern did not increase the difficulty of the stabilization problem. [7] further showed that the problem of stabilization over a noisy channel was equivalent to a problem of anytime communication using the noisy channel with perfect feedback. While this equivalence certainly illuminates the structure of distributed control problems (see [9] for more details), it does not actually allow us to solve any particular problem. This is especially true since the previously known generic constructions for anytime codes are based on infinite random trees.

As such, they require growing complexity at both the encoder and decoder. This contrasts in spirit with the idea of stabilization: making a system behave in a nice steady-state manner that returns to the same few safe states over and over again. If the closed-loop system is behaving nicely, why should the observer and controller have a steadily increasing workload?

In this paper, we attack this problem by making a structural requirement on the observer/encoder: it must be a nearly memoryless map (though possibly time-varying and random in nature) that picks channel inputs based only on the current state of the plant. As such, we are aiming to generalize the nicely matched situation that exists when stabilizing a scalar linear plant over an average power-constrained additive white Gaussian noise (AWGN) channel [1]. In that special case, memoryless time-invariant observers (scalar gains) suffice.

In section II, we formally introduce the problems of stabilization and anytime communication and review the existing results from [7] and [8]. In section III, we state and prove the main result of this paper: a sufficient condition for stabilization using nearly memoryless random time-varying encoder/observers over discrete memoryless channels, and a parallel sufficient condition for stabilization using a nearly memoryless time-invariant deterministic encoder/observer for the wideband channel. Finally, in section IV, we comment on the results and the wideband channel construction. In particular, we argue why the power-spectral density of our nominally infinite-bandwidth construction is actually somewhat reasonable.

II. PROBLEM DEFINITION AND REVIEW

A. Stabilization Problem

At the heart of the stabilization problem is a scalar plant that is unstable in open loop. Expressed in discrete-time, we have:

X(t+1) = λ X(t) + U(t) + W(t),   t ≥ 0    (1)

where {X(t)} is an ℝ-valued state process, {U(t)} is an ℝ-valued control process, and {W(t)} is a bounded noise/disturbance process with |W(t)| ≤ Ω/2 a.s. We take λ ≥ 1 so the system is unstable, while X(0) = 0 for convenience. [Footnote 2: Just start time at 1 if you want an unknown but bounded initial condition.]

The distributed nature of the problem comes from having a noisy communication channel in the feedback loop. We require an observer/encoder system O to observe X(t) and generate inputs A(t) to the channel. We also require a decoder/controller system C to observe channel outputs B(t) and generate control signals U(t). Unless otherwise specified, we allow both O and C to have arbitrary memory and to be nonlinear in general.

Definition 2.1: A closed-loop dynamic system with state X(t) is η-stable if there exists a constant K s.t. E[|X(t)|^η] ≤ K for all t ≥ 0.

Holding the η-moment within bounds is a way of keeping large deviations rare. [Footnote 3: [9] shows how to map the results given here to almost-sure stabilization in the undisturbed case when W(t) = 0.] The larger η is, the more strongly we penalize very large deviations.

B. Anytime communication

For simplicity of notation, let M_i be the R-bit message that a channel encoder takes in at time i. Based on all the messages received so far, and any additional information (e.g. past channel output feedback) that it might have, the channel encoder emits the i-th channel input. An anytime decoder provides estimates M̂_i(t), the best estimate for message i at time t. If we are considering a delay d, the probability of error we are interested in is P( M̂_{t−d}(t) ≠ M_{t−d} ).
Definition 2.2: The α-anytime capacity C_anytime(α) of a channel is the least upper bound of the rates at which the channel can be used to transmit data so that there exists a uniform constant K such that for all d and all times t we have

P( M̂_{t−d}(t) ≠ M_{t−d} ) ≤ K 2^{−αd}

[Footnote 4: We could alternatively have bounded the probability of error by 2^{−α(d − log₂ K)} and interpreted log₂ K as a minimum required known delay.]

The probability is taken over the channel and any randomness that we deem the encoder and decoder to have access to. Due to the fast convergence of an exponential, it is equivalent to require that the probability of error for all messages sent up to time t − d is bounded by K′ 2^{−αd} for some constant K′.
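Before turning to particular codes, the plant model (1) and the η-moment criterion of Definition 2.1 are easy to explore numerically. The following is a minimal sketch, not the paper's construction: it assumes a hypothetical noiseless quantized observation of the state (no channel in the loop) just to illustrate the dynamics and the empirical η-moment; all parameter values are illustrative.

```python
import numpy as np

# Minimal sketch (not the paper's scheme): simulate the scalar plant of (1),
# X(t+1) = lam*X(t) + U(t) + W(t), under a hypothetical noiseless quantized
# view of the state, and track the empirical eta-moment of Definition 2.1.

rng = np.random.default_rng(0)

lam, Omega, eta = 1.3, 1.0, 2.0     # unstable gain, disturbance bound, moment order
Delta = 0.5                         # hypothetical quantizer bin width
T_sim = 10_000

X = 0.0
moments = []
for t in range(T_sim):
    bin_center = Delta * np.floor(X / Delta) + Delta / 2.0   # noiseless quantized view
    U = -lam * bin_center                                    # drive the bin center to zero
    W = rng.uniform(-Omega / 2.0, Omega / 2.0)               # bounded disturbance
    X = lam * X + U + W
    moments.append(abs(X) ** eta)

print("empirical E[|X(t)|^eta] over the run:", np.mean(moments))
```

The rest of the paper is about achieving this kind of bounded moment when the observation must travel through a noisy channel.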

In [8], we studied the special case of the infinite-bandwidth AWGN channel with an average power constraint but no feedback. Since the channel is continuous-time, it is up to us how to discretize it. We chose a slot length Δ > 0 so that exactly one new R-bit message arrives at the encoder every Δ seconds. For that case, we defined a coding strategy called repeated pulse position modulation, illustrated in figure 2. This coding strategy sent a single pulse with energy E_b whose position in the next Δ seconds represented the value of all the bits received up until this point. The decoding of the bits so far proceeded by maximum likelihood as applied to the entire received channel output so far.

Fig. 2. Repeated pulse position modulation illustrated. The time slots are on the top and the possible sub-slots are on the bottom.

Theorem 2.3: [8] The repeated pulse position modulation code under maximum-likelihood decoding for individual bits with delay d achieves the orthogonal coding error exponent for every delay and bit position:

P( M̂_i(i+d) ≠ M_i ) ≤ K 2^{−d E_orth(R)}    (2)

where C = (P/N₀) log₂ e and:

E_orth(R) = { C/2 − R        if 0 ≤ R ≤ C/4
            { (√C − √R)²     if C/4 < R < C       (3)
            { 0              otherwise

where P is the average transmit power and N₀ represents the noise intensity. Thus the energy per bit must satisfy E_b > N₀ ln 2 for communication to proceed.

The only property of repeated PPM required to prove this result was captured by this lemma:

Lemma 2.1: [8] The repeated PPM code is semi-orthogonal. If a is the waveform corresponding to the bitstream M, and a′ is the waveform corresponding to the bitstream M′, then the restriction of a to [(j−1)Δ, T] is orthogonal to the restriction of a′ to [(j−1)Δ, T] whenever there exists a bit position i ≤ j for which M_i ≠ M′_i.

The only property of time-disjoint pulses that we needed was that they formed an orthogonal family of functions. We could just as well use another family of orthogonal functions. For example, consider frequency-disjoint pulses as depicted in figure 3:

g_{i,Δ}(t) = { 0                         if t < 0
             { sin(4πi t / Δ)            if 0 ≤ t ≤ Δ, i ≥ 0      (4)
             { sin(2π(−2i − 1) t / Δ)    if 0 ≤ t ≤ Δ, i < 0
             { 0                         if t > Δ

Fig. 3. The first few members of a countable set of basic codefunctions on [0, Δ] that are all orthogonal to each other. These represent windowed sinusoids at distinct frequencies.

The g_{i,Δ} family is also orthogonal and can be used to make a semi-orthogonal code with the added advantage of meeting a strict amplitude (or peak power) constraint.
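To make the frequency-disjoint alternative concrete, here is a small numerical sketch following the reconstructed form of (4): windowed sinusoids on [0, Δ] whose harmonic index is determined by the sign of i. The specific harmonic assignment and unit amplitude are assumptions of this sketch; the point being checked, pairwise orthogonality of distinct indices, is what the lemma actually uses.

```python
import numpy as np

# Sketch of the frequency-disjoint family in (4): windowed sinusoids on
# [0, Delta] whose harmonic index depends on the sign of i.  We verify
# numerically that distinct indices give (approximately) orthogonal waveforms.

Delta = 1.0
t = np.linspace(0.0, Delta, 20_001)
dt = t[1] - t[0]

def g(i, t, Delta=Delta):
    # even harmonics for i >= 0 (i = 0 gives the all-zero waveform),
    # odd harmonics for i < 0; zero outside [0, Delta]
    k = 2 * i if i >= 0 else -2 * i - 1
    w = np.sin(2.0 * np.pi * k * t / Delta)
    return np.where((t >= 0.0) & (t <= Delta), w, 0.0)

indices = [-3, -2, -1, 1, 2, 3]        # i = 0 excluded (zero waveform)
G = np.vstack([g(i, t) for i in indices])
gram = (G @ G.T) * dt                  # inner products <g_i, g_j> on [0, Delta]
print(np.round(gram, 3))               # ~ (Delta/2) * identity: pairwise orthogonal
```

The Gram matrix comes out numerically diagonal, which is exactly the semi-orthogonality ingredient needed for the error-exponent analysis.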

C. Anytime communication and stabilization

Theorem 2.4: [7] For a given noisy channel, bound Ω, and η > 0, if there exists an observer/encoder O and controller/decoder C for the unstable scalar system that achieves E[|X(t)|^η] < K for all bounded driving noise −Ω/2 ≤ W(t) ≤ Ω/2, then C_anytime(η log₂ λ) ≥ log₂ λ bits per channel use for the noisy channel considered with noiseless feedback.

And for the non-nested information pattern case:

Theorem 2.5: [7] It is possible to control an unstable scalar process driven by a bounded disturbance over a noisy finite output-alphabet channel so that the η-moment of X(t) stays finite for all time if the channel with feedback has C_anytime(α) > log₂ λ for some α > η log₂ λ and if the observer is allowed to observe the state X(t) exactly.

The encoders and decoders used in the proofs of these theorems were far from simple. Our goal in the next section will be to consider simplified encoders.

III. MAIN RESULT

We first introduce what we mean by a nearly memoryless observer/encoder. Then we state the main result, followed by a sketch of its proof.

A. Nearly memoryless observer/encoders

By nearly memoryless observers, we mean O functions that sample the plant state every T time units (for some T > 0), and then apply a channel input that depends only on the last such sample.

Definition 3.1: A random nearly memoryless observer is a sequence of maps O_t such that there exist maps O′_t so that:

O_t(X_0^t, Z_0^t) = O′_t( X(⌊t/T⌋T), Z_0^t )

where the Z_i are the iid continuous uniform random variables on [0, 1] that represent the common randomness available at both the observer/encoder and the controller/decoder.

Definition 3.2: A time-invariant deterministic nearly memoryless observer is a sequence of maps O_t such that there exists a single map O′ so that:

O_t(X_0^t, Z_0^t) = O′( X(⌊t/T⌋T) )

where the Z_i are, as before, the iid continuous uniform random variables representing the common randomness; a deterministic observer simply ignores them.

B. Main theorems and proof

Theorem 3.3: We can η-stabilize an unstable scalar process driven by a bounded disturbance over a discrete memoryless channel if the channel without feedback has random block-coding error exponent E_r(R) > η log₂ λ for some R > log₂ λ and the observer is only allowed access to the plant state, where

E_r(R) = max_{P_A} max_{ρ ∈ [0,1]} [ −ρR − log₂ Σ_b ( Σ_a P_A(a) p_ab^{1/(1+ρ)} )^{1+ρ} ]

and p_ab represents the probability of an a → b transition in the discrete memoryless channel. Furthermore, there exists a T > 0 so this is possible by using a nearly memoryless random encoder/observer consisting of a time-varying random scalar quantizer that samples the state every T time steps and outputs a random label for the bin index. This random label is chosen iid from the channel input alphabet A^T according to the distribution that maximizes the random coding error exponent at R. The controller is assumed to have access to the common randomness used to choose the random bin labels.

And in the case of the wideband channel:

Theorem 3.4: We can η-stabilize an unstable scalar process driven by a bounded disturbance over a continuous-time wideband AWGN channel with average power constraint P if E_orth(R) > η log₂ λ for some R > log₂ λ and the observer is only allowed access to the plant state. Furthermore, there exists a T > 0 so this is possible by using a time-invariant deterministic nearly memoryless encoder/observer consisting of a scalar quantizer Q that samples the state every T time steps and, over the interval [⌊t/T⌋T, (⌊t/T⌋+1)T), outputs the waveform √(PT) g_{Q(X(⌊t/T⌋T)), T}(t − ⌊t/T⌋T) into the channel.

Proof of both theorems: Pick a rate R > log₂ λ for which E_r(R) > η log₂ λ (respectively, E_orth(R) in the wideband case).
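For concreteness, here is a sketch of evaluating the random-coding exponent E_r(R) of Theorem 3.3 for a binary symmetric channel and checking the sufficient condition E_r(R) > η log₂ λ. The uniform input distribution is optimal for the BSC by symmetry, so the outer maximization over P_A is omitted; the values of λ, η, the crossover probability, and R are illustrative, not from the paper.

```python
import numpy as np

# Sketch: Gallager's random-coding exponent E_r(R) from Theorem 3.3, evaluated
# for a binary symmetric channel with crossover probability eps.  Uniform inputs
# are optimal for the BSC, so only rho in [0, 1] is searched.  Rates in bits/use.

def Er_bsc(R, eps, num_rho=1001):
    P = np.array([[1.0 - eps, eps], [eps, 1.0 - eps]])   # channel law p(b|a)
    Pa = np.array([0.5, 0.5])                            # uniform input distribution
    best = 0.0
    for rho in np.linspace(0.0, 1.0, num_rho):
        inner = (Pa[:, None] * P ** (1.0 / (1.0 + rho))).sum(axis=0)   # sum over a
        E0 = -np.log2((inner ** (1.0 + rho)).sum())                    # sum over b
        best = max(best, E0 - rho * R)
    return best

lam, eta, eps, R = 1.1, 1.0, 0.02, 0.3          # illustrative; need R > log2(lam) ~ 0.1375
print("E_r(R)        =", round(Er_bsc(R, eps), 4))
print("eta*log2(lam) =", round(eta * np.log2(lam), 4))   # sufficient if E_r(R) exceeds this
```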
Pick T and Δ large enough so that:

T R > log₂( λ^T + (Ω/Δ) Σ_{i=0}^{T−1} λ^i )

This means that if X(t) is known to be within any given box of size Δ, then with no controls applied, the plant state X(t+T) can be in no more than 2^{TR} adjacent boxes, each of size Δ. It is clear that such a (T, Δ) pair always exists as long as R > log₂ λ.
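The choice of (T, Δ) can be mechanized directly from the displayed inequality. The following sketch finds the smallest sampling interval T for given values of R, λ, Ω, and Δ; the particular numbers are assumptions for illustration only.

```python
import math

# Sketch: smallest sampling interval T (in time steps) satisfying
#   T*R > log2( lam**T + (Omega/Delta) * sum_{i=0}^{T-1} lam**i ),
# i.e. a bin of width Delta spreads over at most 2**(T*R) adjacent bins
# after T uncontrolled steps.  Illustrative parameter values only.

def smallest_T(R, lam, Omega, Delta, T_max=10_000):
    for T in range(1, T_max + 1):
        spread = lam ** T + (Omega / Delta) * sum(lam ** i for i in range(T))
        if T * R > math.log2(spread):
            return T
    return None   # none found up to T_max (R too small or too close to log2(lam))

print(smallest_T(R=0.3, lam=1.1, Omega=1.0, Delta=4.0))   # -> 8 for these values
```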

Our quantizer Q will look at the plant state X to a coarseness Δ. It is clear that we satisfy the following:

a. Knowing that X(t) is in a given bin of width Δ and assuming that no controls are applied, there are at most 2^{TR} possible bins which could contain X(t+T).

b. The descendants of a given bin dT time units later are all in contiguous bins and furthermore, there exists a constant K such that the total length covered by these bins is at most K λ^{dT}.

Properties [a] and [b] above easily extend to the case when the control sequence between times t and t+T is known exactly, since linearity tells us that the impact of these controls is just a translation of all the bins by a known amount. Thus, we have:

c. Conditioned on actual past controls applied, the set of possible paths that the states X(0), X(T), X(2T), ... could have taken through the quantization bins is a subset of a trellis that has a maximum branching factor of 2^{TR}. Furthermore, the total length covered by the d-stage descendants of any particular bin is bounded above by K λ^{dT}.

Not all such paths through the trellis are necessarily possible, but all possible paths do lie within the trellis. Figure 4 shows what such a trellis looks like and figure 5 shows its tree-like local structure.

Fig. 4. A short segment of the randomly labeled regular trellis from the point of view of the controller that knows the actual control signals applied in the past. The example has R = log₂ 3 and λ ≈ 2.4 with Δ large.

In the case of a DMC, the observer/encoder just uses the common randomness to independently assign symbols from A^T as time-varying labels to the bins. The probability measure used to draw the symbols should be the E_r(R)-achieving distribution. Thus, the labels on each bin are iid through both time and across bins. In the case of the wideband channel, the g_{i,T} function is used, with i being the integer quantization bin number of the past observed state to a coarseness of Δ. In this case, it is clear that different states result in orthogonal channel inputs.

Call two paths through the trellis disjoint with depth d if their last common node was at depth d and the paths are disjoint after that. We immediately observe:

d. If two paths are disjoint in the trellis at a depth of d, then the channel inputs corresponding to the past dT channel uses are independent of each other in the DMC case and are orthogonal to each other in the case of the wideband channel.

Our controller just searches for the ML path through the trellis. The trellis itself is constructed based on the controller's memory of all past applied controls. Once we have the ML path, a control signal is applied based on the estimated state bin at the end of the ML path. This control signal is designed to drive the center of the box to zero in the next time step.

Fix a time t and consider an error event at depth d. This represents the case that the maximum likelihood path last intersected with the true path dT time steps ago. By property [c] above, our control will be based on a state estimate that can be at most K λ^{dT} away from the true state. Thus, we have:

e. If an error event at depth d occurs at time t, the state |X(t+T)| is smaller than K′ λ^{(d+1)T} for some constant K′ that does not depend on d or t.

By property [c], there are no more than 2^{dTR} possible disjoint false paths that last intersected the true path d stages ago. By the memorylessness of the channel, the log-likelihood of each path is the sum of the likelihood of the prefix of the path leading up to d stages ago and the suffix of the path from that point onward.

For a path that is disjoint from the true path at a depth of d to beat all paths that end up at the true final state, the false path must have a suffix log-likelihood that at least beats the suffix log-likelihood of the true path. Property [d] tells us that the channel inputs corresponding to the false paths are pairwise independent of the true inputs for the past dT channel uses in the case of the DMC, and orthogonal to those of the true path in the case of the wideband channel. All we require to apply Gallager's random block-coding analysis of Chapter 5 in [4] is such a pairwise independence between the true and false codewords for a code of length dT. Similarly, all we require to apply Gallager's orthogonal coding analysis of Chapter 7 in [4] is such a pairwise orthogonality. So we have:

f. The probability that the ML path diverges from the true path at depth d is no more than L 2^{−dT E_r(R)} in the case of the DMC, and no more than L 2^{−dT E_orth(R)} in the case of the wideband channel, for some L > 0.

Taking a normal union bound over all depths d results in a geometric sum which converges, giving us:

g. The probability that the ML path diverges from the true path at depth d or more is no more than K′′ 2^{−dT E_r(R)} for some constant K′′ > 0 that does not depend on d or t, and similarly using E_orth for the wideband channel.

Fig. 5. Locally, the trellis looks like a tree with the nodes corresponding to the intervals where the state might have been and the levels of the tree corresponding to the time. It is not a tree because paths can remerge, but all labels on disjoint paths are chosen so that they are independent of each other.

All that remains is to analyze the η-moment by combining [g] and [e]:

E[|X(t+T)|^η] ≤ Σ_{d=0}^{⌊t/T⌋} (K′′ 2^{−dT E_r(R)}) (K′ λ^{(d+1)T})^η
             ≤ K′′ (K′ λ^T)^η Σ_{d=0}^{∞} 2^{−dT E_r(R)} λ^{ηdT}
             = K′′ (K′ λ^T)^η Σ_{d=0}^{∞} 2^{−dT (E_r(R) − η log₂ λ)}
             = K < ∞

where the final geometric sum converges since E_r(R) > η log₂ λ. And similarly for E_orth in the case of the wideband channel.

The decoder analysis given here is for ML decoding. However, the classical analysis of Forney in [2] and [3] suggests that an idea corresponding to sequential decoding for a trellis should function in the decoder/controller. This approximate decoding algorithm will only cost a higher constant factor K in performance since it achieves the same exponents as ML decoding. Using sequential decoding, the computational effort expended at the decoder is now a random variable that does not need to grow with t. [Footnote 5: The memory use at the decoder/controller will increase with time, as we must store the past channel outputs just in case the sequential decoder needs to backtrack. However, this is not likely to be a problem in modern systems since we could easily store many minutes of channel outputs without trouble. If the decoder ever has to back up that much, the model would have long since broken down for other reasons.] Finally, [6] tells us that sequential decoding's computational effort must have a Pareto distribution with an infinite mean as we get close to the channel capacity. However, in the control context, we are probably aiming for a reasonably large η, in which case we will be far enough away from capacity to give us a finite mean for computational effort.
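As a rough illustration of how simple the observer/encoder of Theorem 3.3 is, here is a sketch of the nearly memoryless randomized encoder: quantize the fresh sample to a bin index and emit that bin's current random label of T channel symbols. The binary alphabet with uniform labels stands in for the E_r(R)-achieving input distribution of an actual channel, and the dictionary of drawn labels stands in for the shared common randomness; both are simplifying assumptions of this sketch.

```python
import numpy as np

# Sketch of the nearly memoryless randomized observer/encoder of Theorem 3.3.
# Uniform binary labels stand in for the E_r(R)-achieving input distribution,
# and the label dictionary stands in for the common randomness shared with the
# controller; labels are redrawn independently at every sampling time.

rng = np.random.default_rng(1234)    # stand-in for the shared common randomness

T, Delta = 8, 4.0                    # sampling interval and quantizer bin width
alphabet = np.array([0, 1])          # channel input alphabet A

class NearlyMemorylessEncoder:
    def __init__(self):
        self.labels = {}             # (sampling time k, bin index) -> label in A^T

    def encode(self, k, x):
        """T channel inputs for sampling time k, given only the fresh sample x."""
        b = int(np.floor(x / Delta))            # time-invariant scalar quantizer
        if (k, b) not in self.labels:           # time-varying iid random label
            self.labels[(k, b)] = rng.choice(alphabet, size=T)
        return self.labels[(k, b)]

enc = NearlyMemorylessEncoder()
print(enc.encode(0, 2.7))            # label for the bin containing x = 2.7 at time 0
print(enc.encode(1, 2.7))            # fresh, independent label at the next sampling time
```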

IV. COMMENTS AND DISCUSSION

It is interesting to consider the actual bandwidth used by the code for the wideband channel. In [8], the refining PPM code actually ended up using an infinite amount of bandwidth in a nontrivial way. However, in the control context, we make the following observation: if the system is stable, the state is steadily returning to the neighborhood of the origin. The channel inputs corresponding to those bins all have only low frequencies in them. Thus, the probability of transmitting a signal with a high-frequency component in it is bounded in a manner similar to the state. By Markov's inequality, we have:

P(|X| > x) = P(|X|^η > x^η) ≤ E[|X|^η] x^{−η} ≤ K x^{−η}

By the construction of g_{i,T}, we have that:

P(f_t > f) ≤ P(|X(t)| > f T Δ / 2) ≤ K′ f^{−η}

This tells us that if we want the power spectral density of A(t) to die off as 1/f², then all we need to do is ensure that the plant state X has a finite second moment. [Footnote 6: No such power spectral density truly exists because A(t) is non-stationary due to the period T. However, this is only a formality that can be eliminated by considering A(t) as an approximately cyclostationary process and using the appropriate technical machinery.] When viewed on a log-log plot, the power spectrum will have a tail that looks roughly linear for the low-power, high-frequency part. This suggests that variations of such a control scheme might actually be practical since truly infinite bandwidth is never available in practice. In fact, for control systems involving wireless links in an ultrawideband scenario, the potentially large bandwidth assumption but hard amplitude constraint model used here is probably much more realistic than the usual linear Gaussian model.

V. ACKNOWLEDGMENTS

The author thanks students Amirali Zohrenejad and Paul S. Liu for their initial simulations and study of the λ = 1 case with random memoryless encoders.

REFERENCES

[1] R. Bansal and T. Basar, Simultaneous design of measurement and control strategies for stochastic systems with feedback, Automatica, vol. 25, no. 5, 1989.
[2] G.D. Forney, Jr., Convolutional codes II: Maximum-likelihood decoding, Information and Control, vol. 25, 1974.
[3] G.D. Forney, Jr., Convolutional codes III: Sequential decoding, Information and Control, vol. 25, 1974.
[4] R.G. Gallager, Information Theory and Reliable Communication. New York, NY: John Wiley and Sons, 1968.
[5] Y.C. Ho, M.P. Kastner, and E. Wong, Teams, signaling, and information theory, IEEE Transactions on Automatic Control, April 1978.
[6] I.M. Jacobs and E.R. Berlekamp, A lower bound to the distribution of computation for sequential decoding, IEEE Transactions on Information Theory, vol. 13, April 1967.
[7] A. Sahai, The necessity and sufficiency of anytime capacity for control over a noisy communication link, Proceedings of the 43rd IEEE Conference on Decision and Control, December 2004.
[8] A. Sahai, Anytime coding on the infinite bandwidth AWGN channel: A sequential semi-orthogonal code, Proceedings of the 39th Annual Conference on Information Sciences and Systems, March 2005.
[9] A. Sahai and S.K. Mitter, The necessity and sufficiency of anytime capacity for control over a noisy communication link: Part I, to be submitted to the IEEE Transactions on Information Theory, April 2005.
[10] H. Witsenhausen, Separation of estimation and control for discrete time systems, Proceedings of the IEEE, vol. 59, no. 11, November 1971.
