A vector version of Witsenhausen's counterexample: Towards convergence of control, communication and computation


A vector version of Witsenhausen's counterexample: Towards convergence of control, communication and computation

Pulkit Grover and Anant Sahai
Wireless Foundations, Department of EECS
University of California at Berkeley, CA 94720, USA
{pulkit, sahai}@eecs.berkeley.edu

Abstract—At its core, the renowned Witsenhausen counterexample contains an implicit communication problem. Consequently, we argue that the counterexample provides a useful conceptual bridge between distributed control and communication problems. Inspired by the success in studying long block lengths in information theory, we consider a vector version of the Witsenhausen counterexample. For this example, information-theoretic arguments relating to lossy compression, channel coding, and dirty-paper coding are used to show the existence of nonlinear encoding-decoding control strategies that outperform optimal linear laws and have the ratio of costs go to infinity asymptotically in the vector-space dimension, over a much broader range of cost parameters than the previous scalar examples. The vector example is then in turn viewed as a collection of scalar random variables with a four-phase distributed control strategy. First a set of agents make observations and communicate with each other to coordinate a first-stage control strategy, then they individually act on their state. A second set of agents now make noisy observations and communicate to coordinate a control strategy, and finally they act on the state again. The vector case can be considered one in which the first and third phases are free. It is thus natural to impose a cost on the length of the first and third phases, and this can in turn be viewed as inducing a natural cost function on the information pattern itself. Inspired by this, we close by considering the simplest possible information-theoretic analog of the problem: lossless compression of a binary state vector.
It turns out that the information pattern can be used as a natural proxy for computational complexity, and this gives a new result on the fundamental complexity of lossless compression in terms of the tradeoff between rate, effort, and the probability of error.

I. INTRODUCTION

For LQG systems with perfectly classical information patterns, it was well known that control laws affine in the observation are optimal. In [1], Witsenhausen gave an explicit counterexample that demonstrated the importance of information patterns in control problems. The counterexample was a chosen distributed control system, and hence a system with a non-classical information pattern, that was otherwise quadratic and Gaussian. For this system, Witsenhausen provided a nonlinear control law that outperformed the optimal linear control law, and also demonstrated that a measurable optimal control law does exist. The counterexample has inspired a large volume of research along three related themes. The first body of work is devoted to finding the elusive optimal control law for the problem. For the simplicity with which the problem is stated, it is interesting to note that the optimal control law is still unknown. In [2], a discrete version of the problem is introduced. This allows for a convex formulation over a set of complicated constraints. However, in [3], the discrete version was shown to be NP-complete. In search of an optimal law, a sequence of results were obtained in, amongst other works, [4]–[6], using tools from information theory, neural networks, and stochastic optimization respectively. Since the problem is nonconvex, this work has also inspired numerical methods for solving nonconvex problems. The second theme is in refining the classification of distributed LQG systems into those for which affine laws are optimal, and those for which affine laws are not optimal. In [4], the authors consider a parametrized family of two-stage stochastic control problems. The family includes the Witsenhausen counterexample.
The authors show that whenever the cost function does not contain a product of two decision variables, affine control laws are optimal. The authors use results from information theory to arrive at the optimality result. In [7], the author shows that affine controls are still optimal for a deterministic variant of the Witsenhausen counterexample if the cost function is the induced two-norm instead of the expected two-norm of the stochastic variant. The third theme has been in viewing the counterexample as a bridge between control and communication [8]. In [9], the authors observe that the original Witsenhausen problem is in essence a communication problem between the two controllers. They back up this observation by proposing control strategies that are explicitly based on quantization of the initial state. The strategy is conceptually related to Tomlinson-Harashima precoding (see e.g. [10, Pg. 454]) for what is called dirty-paper coding in information theory. The authors then generate a sequence of problem parameters for which nonlinear strategies based on quantization outperform the optimal linear strategies by a factor that tends to infinity. This work inspired a larger body of work that considered explicit (rather than implicit) communication channels connecting the two controllers and took asymptotics in time [11]–[16], and the idea of implicit communication plays a vital role in [17], [18]. The counterexample itself was revisited yet again in [19], where the author adapts the standard information-theoretic concern with side-information into a modified Witsenhausen

problem. The side-information about the initial state is passed through a noisy AWGN channel before being received by the second controller, and is itself subject to an SNR constraint. The author shows that nonlinear schemes still outperform linear ones. In fact, at low SNR, nonlinear schemes that do not make use of the side-information outperform all linear ones, including those that make use of the side-information. It can be argued that the root of all these connections between information theory and control can be traced back to Witsenhausen's counterexample. It might seem that the exploration of connections between information theory and control is mature and no longer needs to consider the counterexample as a bridge. In this work, we challenge that view by returning to the Witsenhausen counterexample. We investigate how tools in information theory, specifically the use of asymptotically long block lengths, can contribute towards improving our understanding of the counterexample. In Section II, we state the vector version of the Witsenhausen problem. Assuming the vector length is asymptotically large, in Section III we propose a pair of nonlinear schemes building on the scalar quantization ideas introduced in [9]. The first scheme is based on the information-theoretic concepts of lossy compression (vector quantization) and joint source-channel coding. The second is inspired by dirty-paper coding [20]. We show that the proposed schemes outperform all affine schemes as well as the scalar scheme of [9]. The new control schemes as proposed treat the entire vector all at once. While usually accepted without question in information-theoretic circles, this seems aphysical in the context of distributed control. The vector example can be viewed as a distributed collection of scalar random variables with a four-phase distributed control strategy.
First a set of agents make observations and communicate with each other to coordinate a first-stage control strategy, then they act on the state; a second set of agents now make noisy observations and communicate to coordinate a second-stage control strategy, and finally act on the state again. It is thus natural to impose a cost on the length of the first and third phases, and this can in turn be viewed as inducing a natural cost function on the information pattern itself. In Section IV, we observe that the system is a collection of scalar Witsenhausen problems, with the additional freedom that controllers can send messages iteratively to each other in order to perform the encoding at time 1 and the decoding at time 2. The Witsenhausen counterexample thus leads naturally to a new information-theoretic problem of understanding the complexity of distributed lossy compression. Building on our work in [], we formulate a toy lossless source-coding problem to explore the tradeoff between the various costs of operating such a distributed system. This paper does not represent the end of a story, but rather an attempt to demonstrate that the Witsenhausen counterexample still has plenty of life left in it, even after 40 years of providing inspiration to control researchers.

II. THE PROBLEM: DISTRIBUTED CONTROL

We generalize the scalar Witsenhausen problem to a vector case. The system is still a two-step control system. The states and the inputs are now vectors of length m. A vector is represented in bold font, with the superscript used to denote the vector length, e.g. x^m. As in conventional notation, x is used to denote states, u the inputs, and y the observations. The state x₀^m is distributed N(0, σ₀²I). The state transition functions:

x₁^m = f₁(x₀^m, u₁^m) = x₀^m + u₁^m, and x₂^m = f₂(x₁^m, u₂^m) = x₁^m − u₂^m.

The output equations:

y₁^m = g₁(x₀^m) = x₀^m, and y₂^m = g₂(x₁^m) = x₁^m + w^m,

where w ~ N(0, σ_w²I). We assume that σ_w² < σ₀². The cost expressions:

h₁(x₁^m, u₁^m) = (k²/m)‖u₁^m‖², and h₂(x₂^m, u₂^m) = (1/m)‖x₂^m‖².
The cost expressions are normalized by the vector length so that they do not grow with the problem size. The information patterns: the first controller observes Y₁ = {y₁^m} and chooses U₁; the second controller observes Y₂ = {y₂^m} and chooses U₂. Observe that the first controller is assumed to have complete knowledge of y₁^m, and similarly the second controller has complete knowledge of y₂^m. Therefore the system is not completely distributed. Section IV shows that there are computational costs associated with making the system completely distributed. In the next section we provide a pair of nonlinear schemes that outperform the optimal linear scheme.

III. THE SCHEMES: CONTROL AND COMMUNICATION

In this section we provide a nonlinear coding scheme that is based on the concept of joint source-channel coding in information theory. To enable understanding of the scheme, we review some fundamental results and definitions from information theory in Appendix I. These are taken from [], and the reader is referred to [] for further details.

A. The first joint source-channel scheme

We now briefly describe the scheme, before giving a detailed description and analyzing its performance. As in [9], the idea is to quantize the space of realizations of x₀^m to arrive at x₁^m. These points are chosen carefully so that, with high probability, the second controller can recover x₁^m from the noisy observation y₂^m. By making u₂^m = x̂₁^m, the second controller can now force x₂^m, and hence the second cost, to zero. In the vector case, for a careful choice of points, the probability of error in recovering x₁^m converges to zero exponentially in m [3]. Therefore, for large enough m, the average cost at time 2 can be made as small as desired.
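To make the quantization idea concrete, the following sketch (our illustration, not from the paper) uses a scalar stand-in: a uniform grid of hypothetical spacing B plays the role of the carefully chosen codebook points, and all parameter values are illustrative.

```python
import numpy as np

# Scalar stand-in for the quantization-based scheme: the first controller
# forces the state onto a grid of spacing B; the second controller recovers
# the grid point from its noisy observation and cancels it.
rng = np.random.default_rng(0)
m, B, sigma_0, sigma_w, k = 10_000, 10.0, 30.0, 1.0, 0.1

x0 = rng.normal(0.0, sigma_0, m)
u1 = np.round(x0 / B) * B - x0      # push each state to the nearest grid point
x1 = x0 + u1                        # x1 now lies exactly on the grid
w = rng.normal(0.0, sigma_w, m)
y2 = x1 + w                         # noisy observation at the second controller
u2 = np.round(y2 / B) * B           # recover the grid point (correct w.h.p.)
x2 = x1 - u2

stage1_cost = k**2 * np.mean(u1**2)   # roughly k^2 * B^2 / 12 for this grid
stage2_cost = np.mean(x2**2)          # zero whenever every |w| < B/2
print(stage1_cost, stage2_cost)
```

Whenever every noise sample satisfies |w| < B/2, the recovered grid point is exact and the second-stage cost is exactly zero; the vector scheme in the text achieves the same effect with a far smaller first-stage cost by replacing the grid with a random codebook.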

At time 2, the state of the system is x₁^m = x₀^m + u₁^m. We use the following construction to find u₁^m for each x₀^m. First, we design a rate-R source code for distortion D, where σ₀² > D > σ_w². A random codebook is constructed, with each codeword drawn randomly from the distribution N(0, σ_D²I) for σ_D² = σ₀² − D. If the code rate R satisfies R ≥ R(D) = ½ log(σ₀²/D), the average distortion is no greater than D in the limit. The choice of u₁^m is the distortion x̂₀^m − x₀^m, and the resulting x₁^m = x̂₀^m. Thus x₁^m, which is the quantized x₀^m, is itself transmitted across the channel. Since a random Gaussian codebook achieves the channel capacity [] for an average power constraint equal to the average power of the codebook, the points in the codebook form a good channel code as well. Since these codewords are generated N(0, σ_D²I), the average power of the codebook is σ_D² = σ₀² − D. Therefore, x₁^m can be recovered reliably at the second controller for rates R < C, where C = ½ log(1 + σ_D²/σ_w²) = ½ log(1 + (σ₀² − D)/σ_w²). Simplifying the capacity expression,

C = ½ log( (σ_w² + σ₀² − D)/σ_w² ) > ½ log( (D + σ₀² − D)/D ) = ½ log( σ₀²/D ) = R(D),

where the inequality uses the fact that D > σ_w².¹ Therefore, reliable communication is possible at a rate R with R(D) < R < C. Since D is the mean-square distortion (1/m)E[‖x̂₀^m − x₀^m‖²], it is also the mean-square input required to drive x₀^m to x₁^m. Therefore, the cost at time 1 is k²D. Observe that D is only constrained by the inequality D > σ_w². Asymptotically, therefore, the first-stage cost is k²σ_w². Since the error probability converges to zero exponentially in m, for large enough m the average cost at the second stage can be made as close to zero as desired. Therefore, the asymptotic total cost is just k²σ_w².

B. Another information-theoretic scheme

Note that in the above scheme, the cost at the second stage is zero. Dirty-paper coding [20] suggests another scheme for which the second-stage cost is not zero. Observe that the lossy source code reduces the power that is fed into the channel. This imposes the constraint D > σ_w².
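As a numerical sanity check of the joint source-channel construction above (our own check; the parameter values are illustrative), the inequality R(D) < C indeed holds over the whole admissible range σ_w² < D < σ₀²:

```python
import numpy as np

# Verify R(D) < C for sigma_w^2 < D < sigma_0^2 (the JSCC feasibility window).
sigma0_sq, sigmaw_sq = 100.0, 1.0
D = np.linspace(sigmaw_sq * 1.001, sigma0_sq * 0.999, 1000)

R_D = 0.5 * np.log2(sigma0_sq / D)                   # rate-distortion function
C = 0.5 * np.log2(1 + (sigma0_sq - D) / sigmaw_sq)   # capacity at power sigma_0^2 - D

assert np.all(R_D < C)   # reliable communication is possible at R(D) < R < C
print(float(np.min(C - R_D)))
```

The margin C − R(D) shrinks to zero as D approaches σ_w², which is exactly why the scheme's first-stage cost can approach, but not reach, k²σ_w².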
Alternatively, dirty-paper coding techniques [20] in information theory can be thought of as performing a similar quantization without reducing the power. This suggests that dirty-paper schemes might perform better than the joint source-channel scheme. We refer the reader to Costa's original paper [20] for more details. We also observe that, due to a different problem formulation, our notation differs from that in [20]. The scheme proceeds by choosing an auxiliary random variable V ~ N(0, P + α²σ₀²), for some α that will be an optimization parameter. 2^{mT} iid sequences² are drawn uniformly at random from the set of typical v^m, where

T = ½ log( (P + σ₀² + σ_w²)(P + α²σ₀²) / (Pσ₀²(1 − α)² + σ_w²(P + α²σ₀²)) ).   (3)

These sequences are then distributed uniformly over 2^{mR} bins. A particular bin is chosen.³ The encoding is now performed as follows. Given a source sequence x₀^m, a v^m jointly typical with x₀^m is first found in the chosen bin. Then the control u₁^m is chosen as u₁^m = v^m − αx₀^m. The received sequence y₂^m is, therefore,

y₂^m = u₁^m + x₀^m + w^m.   (4)

It is shown in [20] that the decoder (in our case the second controller) can recover v^m from the received sequence as long as the rate R is smaller than

C(α, P) = ½ log( P(P + σ₀² + σ_w²) / (Pσ₀²(1 − α)² + σ_w²(P + α²σ₀²)) ).   (5)

We are not interested in getting a high rate. However, we want to keep P small, since k²P is our cost at time 1. At time 2, the cost is the average mean-square error in estimating u₁^m + x₀^m. The decoder can recover v^m with arbitrarily high probability. Now, v^m = u₁^m + αx₀^m. By design, u₁^m and x₀^m act as if drawn independently. Therefore, we can find the error in estimating u₁^m + x₀^m from v^m by MMSE estimation. This turns out to be

MSE = Pσ₀²(1 − α)² / (P + α²σ₀²).   (6)

The total cost is, therefore,

k²P + Pσ₀²(1 − α)² / (P + α²σ₀²).   (7)

This cost can be achieved only if C(α, P) in (5) is greater than 0. Thus, the optimal cost is obtained by minimizing (7) under the constraint that C(α, P) > 0. Consider α = 1.

¹If a > b > 0, then (a − x)/(b − x) > a/b for all 0 < x < b.
In this case, the MSE cost is zero, so only the first cost is retained. Also,

C(1, P) = ½ log( P(P + σ₀² + σ_w²) / (σ_w²(P + σ₀²)) ),   (8)

which is strictly positive at P = σ_w². Therefore, zero second cost is possible for some values of P < σ_w² for this scheme. Since the MSE cost is zero, the net cost is smaller than k²σ_w². Notice that this was not possible for the joint source-channel scheme, where the cost D is constrained to be greater than σ_w².

²2^{mT} corresponds to the mutual information between V and Y [20].
³Eventually we will let R → 0, so there is no loss in choosing any particular bin.
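A small numerical search (ours; the grid resolution and parameter values are illustrative) confirms that with α = 1 the constraint C(1, P) > 0 from (8) is already met at some P < σ_w², so the total cost (7) drops below the joint source-channel benchmark k²σ_w²:

```python
import numpy as np

# Evaluate the dirty-paper-coding constraint C(alpha, P) > 0 from (5)/(8).
# At alpha = 1 the MSE term in (7) vanishes, so the total cost is k^2 * P.
sigma0_sq, sigmaw_sq = 100.0, 1.0

def C_alpha_P(alpha, P):
    num = P * (P + sigma0_sq + sigmaw_sq)
    den = P * sigma0_sq * (1 - alpha) ** 2 + sigmaw_sq * (P + alpha ** 2 * sigma0_sq)
    return 0.5 * np.log2(num / den)

P_grid = np.linspace(0.01, 2.0, 20000)
feasible = P_grid[C_alpha_P(1.0, P_grid) > 0]
P_min = float(feasible.min())     # smallest feasible power at alpha = 1
print(P_min)
```

For σ₀² = 100 and σ_w² = 1 the smallest feasible power is about 0.99 < σ_w², so the DPC scheme attains total cost roughly 0.99 k²σ_w² with zero second-stage cost, strictly beating the k²σ_w² that the joint source-channel scheme only approaches.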

C. Comparison with linear and scalar schemes

In this section, we compare the vector schemes with the optimal linear scheme and the scalar nonlinear schemes in [9]. For simplicity, assume σ_w² = 1. For a given value of σ₀², the cost for the optimal linear scheme is, from [9],

inf_a { k²a²σ₀² + (1 + a)²σ₀² / (1 + (1 + a)²σ₀²) }.   (9)

Since D > 1, the asymptotic cost for the vector joint source-channel coding based scheme is k²·1 = k². The ratio of the optimal linear cost to the cost for the joint source-channel scheme is, therefore,

inf_a { k²a²σ₀² + (1 + a)²σ₀²/(1 + (1 + a)²σ₀²) } / k² = inf_a { a²σ₀² + (1 + a)²σ₀² / ( k²(1 + (1 + a)²σ₀²) ) }.   (10)

Now let k → 0 and σ₀ → ∞. If a is close to 0, the second term is unbounded. If a is close to −1, the first term is unbounded. For any other value of a, both terms are unbounded. Thus for any choice of sequences (k, σ₀) such that k → 0 and σ₀ → ∞, the ratio diverges to infinity. Observe that there is more flexibility in the choice of (k, σ₀) as compared to that in [9], where a careful choice has been made. The three schemes, viz. the optimal linear scheme and the two vector nonlinear schemes proposed here, are compared in Fig. 1 and Fig. 2. In Appendix II, we show that the proposed scheme outperforms the scalar nonlinear scheme in [9] by a factor of infinity as well. This is also evident from Fig. 2.

Fig. 1. The figure shows the variation of the total cost with k for σ₀ = 4. The joint source-channel (JSCC) scheme and the dirty-paper coding (DPC) scheme perform better at low values of k. At large k, however, the cost at time 1 is larger; therefore the costs for the DPC and JSCC schemes increase. However, the cost for linear schemes is still bounded by 1, by the choice a = 0.

IV. REDISTRIBUTING THE VECTOR CASE: FROM CONTROL BACK TO COMMUNICATION AND COMPUTATION

The schemes in Section III raise a natural question: what does it cost to implement such a scheme in a distributed control
Fig. 2. This figure shows the variation of cost on a log-log scale with n, where n is the parameter that characterizes the family of control problems in [9]. Here B_n = n is the bin size of the scheme in [9], σ_{0,n} = n², and k_n is chosen as in [9]. A lower bound on the cost for this scheme is derived in Appendix II. Since the slopes of the DPC and JSCC costs are better than that of the lower bound on the scheme in [9], the ratio of the costs for the scheme in [9] and these schemes converges to infinity.

context? While the limit of large block lengths is justified in information theory by considering it as introducing longer and longer end-to-end delay in an inherently centralized communication problem [4], this is problematic in a distributed control setting, where a longer vector seems to suggest a distributed controller acting over a larger geographical area. It is natural to consider the vector as made up of m scalar Witsenhausen problems. Therefore, a centralized system might be required to perform the encoding, and another to perform the decoding, which is contrary to the spirit of the counterexample. In order to address this, we allow for iterative message-passing before the actions of both sets of distributed controllers. Message-passing algorithms can be performed in a distributed manner, and have complexity that scales linearly with l·m, where l is the number of iterations performed and m is the block length. Therefore, the normalized computational cost is only linear in l, and does not scale with the block length m. In addition, the success of sparse-graph codes in the coding-theoretic literature, for both channel coding and source coding, gives hope that suitable such codes may exist [5]. Next we describe the encoding and decoding model of this message-passing algorithm. An investigation into the costs for the full joint lossy source-channel coding problem posed here is hard, and we are still working on it as it is a new problem in information theory.
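To make the complexity accounting concrete, the following sketch (ours; the connectivity parameter α and the neighborhood target are hypothetical values, with node degree at most α + 1 as in the model described next) tabulates how a node's neighborhood grows geometrically with iterations while the normalized cost stays linear in l:

```python
# With node degree at most alpha + 1, the set of nodes a given node can hear
# from after l message-passing iterations grows at most geometrically,
# roughly alpha**(l + 1).
alpha = 4

def neighborhood(l):
    return alpha ** (l + 1)

# Total work across m nodes after l iterations is O(l * m); normalized by the
# block length m, the computational cost is just O(l), independent of m.
def normalized_cost(l):
    return l

def min_iterations(n):
    # smallest l whose neighborhood covers n source symbols
    l = 0
    while alpha ** (l + 1) < n:
        l += 1
    return l

print(neighborhood(5), min_iterations(4096))
```

The point of the last function is the converse direction used later in the paper: any bound that forces a large neighborhood n immediately forces at least about log_α(n) iterations, and hence a computational cost that grows with n.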
Based on results in [], one expects to see some fundamental performance-cost tradeoffs. The schemes proposed in Section III are implemented in a couple of steps. In the first step, a quantization of R^m is performed. The resulting quantization, which can be thought of as a lossy source code, is transmitted across the channel. This is followed by channel decoding, which fails with an error probability that converges to zero exponentially fast in m. The performance-cost tradeoffs for channel coding can be understood from

the ideas in []. However, the problem of performance-cost tradeoffs in lossy source coding is entirely new. To begin to understand what such tradeoffs could be for source coding, instead of a Gaussian source we analyze a binary source, which has the advantage of a discrete alphabet. In information theory problems, lossless source coding is generally easier to understand than lossy source coding. Therefore, we restrict our attention here to lossless source coding. Since a binary source that produces 0 or 1 with equal probability is incompressible losslessly, we consider asymmetric binary sources that produce a 1 with probability p < 0.5.

A. The encoding/decoding model

We now describe a message-passing model of the encoder and the decoder. The model is inspired by the distributed Witsenhausen counterexample. We focus on the distributed nature of the encoding and the decoding. We assume that the encoder is physically made of computational nodes that have communication links with other nodes in the encoder. A subset of the nodes are designated source nodes, in that each is responsible for storing the value of a particular source symbol in the initial state x₀^m. Another subset of nodes, called the coded nodes, has members that would eventually store the encoded symbols u^m. There may be additional computational nodes that are just there to help encode. To arrive at u^m, the encoding is performed in an iterative, distributed manner. At the start, each of the source nodes is initialized with one element of the vector x₀^m. In each subsequent iteration, all the nodes send messages to the nodes that they are connected to. At the end of l_e encoder iterations, the values stored in the coded nodes constitute the encoded symbols u^m. The implementation technology is assumed to dictate that each computational node is connected to at most α + 1 > 2 other nodes. No other restriction is assumed on the topology of the encoder.
No restriction is placed on the size or content of the messages, except for the fact that they must depend only on the information that has reached the computational node in previous iterations. If a node wants to communicate with a more distant node, it has to have its message relayed through other nodes. The neighborhood size of each node at the encoder after l_e iterations, which is the number of nodes it has communicated with, is denoted by n_e ≤ α^{l_e+1}. The per-node cost associated with the number of iterations is some function φ(l) that is increasing in l. The decoding model is analogous, with reconstruction nodes responsible for storing the reconstructed symbols, and another subset of nodes that store the encoded symbols u^m.

B. Derivation of a lower bound on complexity for lossless source coding

The source generates m symbols that are encoded losslessly into k symbols at rate R = k/m > h(p). The encoding and decoding are performed iteratively using a message-passing algorithm. Encoding is performed in l_e encoder iterations, and the decoding is performed in l_d decoder iterations. Reconstruction of each bit is performed by using messages from at most α^{l_d} output bits. Each of these output bits depends on at most α^{l_e} source bits. Therefore, each reconstruction is based on a neighborhood of α^{l_e+l_d} source symbols (see Fig. 3). We refer to this as the source neighborhood of the particular symbol. Intuitively, an atypical source realization in this local neighborhood of the reconstruction bit should cause errors in the reconstruction.

Fig. 3. The dashed box in the figure shows the source neighborhood, for one iteration of encoding and decoding, of the reconstruction bit x̂₃. Whether the reconstruction is in error depends only on the source realization in the neighborhood.

The following theorem gives a lower bound on the error probability for a given size of the local neighborhood n.
Turned around, these bounds give lower bounds on n, and hence on the total number of iterations at the encoder and the decoder, l_e + l_d ≥ log_α(n), for a given error probability.

Theorem 1: Consider a binary source P that generates iid Bernoulli(p) symbols, p < 0.5. Let n be the maximum size of the source neighborhood for each reconstructed bit. Then the following lower bound holds on the average probability of bit error:

P_e(P) ≥ sup_{g: h(g) > R, p < g < ½} ( p h⁻¹(δ(g)) / 2 ) ( p(1 − g) / (g(1 − p)) )^{ǫn} 2^{−nD(g‖p)},   (10)

where h is the binary entropy function, D(g‖p) = g log(g/p) + (1 − g) log((1 − g)/(1 − p)),

δ(g) = h(g) − R,   (11)

ǫ = K(g) √( (1/n) log( 2 / (p h⁻¹(δ(g))) ) ),   (12)

and K(g) = √( 2g(1 − g) / log e ).   (13)

Proof: See Appendix III.

We note that the lower bound in Theorem 1 looks much like that in []. Conceptually, the two problems differ only in their source of randomness and in the neighborhood. Observe that the neighborhood here is determined by the number of encoding and decoding operations. This suggests that the encoding costs can be reduced by making the decoding costs larger. We believe this is an artifact of our bounding technique, and is not fundamental to the problem at hand.
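The dominant factor in the bound is 2^{−nD(g‖p)}. The sketch below (ours; the source parameter p = 0.11 and the rates are illustrative) evaluates this exponent at the critical test source g = h⁻¹(R) and shows it vanishing as the rate approaches the entropy, which is what forces the neighborhood n, and hence the number of iterations, to blow up near h(p):

```python
import math

def h(x):  # binary entropy in bits
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def h_inv(y):  # inverse of h on (0, 1/2), by bisection (h is increasing there)
    lo, hi = 1e-12, 0.5
    for _ in range(100):
        mid = (lo + hi) / 2
        if h(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def D(g, p):  # KL divergence D(g || p) in bits
    return g * math.log2(g / p) + (1 - g) * math.log2((1 - g) / (1 - p))

p = 0.11                      # h(p) is roughly 0.5
exponents = [D(h_inv(R), p) for R in (0.55, 0.7, 0.9)]
print(exponents)
```

As R decreases toward h(p), the exponent D(h⁻¹(R)‖p) shrinks toward zero, so a fixed target error probability requires an ever larger neighborhood n.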

C. Tradeoff between control, communication, and computation costs

In Section III, we determined the communication and control costs for the system. The decentralized encoding and decoding framework above allows us to calculate the computation costs. Let gap = R − h(p) denote the gap from optimality for the lossless source-coding problem above. For extremely low error probabilities, analogously to the results in [], we get the following approximate lower bound on the neighborhood size as a function of the error probability and the gap:

n ≳ K log(1/P_e) / gap,   (14)

for some constant K that does not depend on gap or P_e. This lower bound implies that for low computational complexity, the rate R should be at a finite gap from h(p). This suggests that a similar result could hold for the joint source-channel scheme proposed in Section III. That is, to reduce the computational costs, the rate should be bounded away from R(D) for distortion D. Observe that a similar result holds for the gap from the channel capacity []. Therefore, for optimal costs, the system should be operated at a rate C > R > R(D), where R is at a finite gap from both C and R(D). Such an operating point requires that, for the chosen rate R, the distortion D be strictly larger than D(R) > σ_w², thus leading to higher costs at time 1 than those estimated in Section III.

APPENDIX I
SOME USEFUL INFORMATION-THEORETIC CONCEPTS

A. Lossy source coding

Assume that we have a source that produces a sequence x^m ∈ X^m. The encoder describes the source sequence x^m by an index f_m(x^m) ∈ {1, 2, ..., 2^{mR}}. The decoder represents x^m by an estimate x̂^m ∈ X̂^m.

Definition 1: A distortion function or distortion measure is a mapping

d : X × X̂ → R⁺   (15)

from the set of source alphabet-reproduction alphabet pairs into the set of non-negative real numbers. The distortion d(x, x̂) is a measure of the cost of representing the symbol x by the symbol x̂.
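The following sketch (ours) instantiates the notion of a distortion measure with the two examples used in this paper, Hamming distortion for the binary source and squared-error distortion for the Gaussian source, and averages each over a sequence:

```python
def hamming(x, xhat):          # d(x, xhat) = 1 if the symbols differ, else 0
    return 0.0 if x == xhat else 1.0

def squared_error(x, xhat):    # d(x, xhat) = (x - xhat)^2
    return (x - xhat) ** 2

def sequence_distortion(d, xs, xhats):
    # per-symbol average distortion between two length-m sequences
    return sum(d(x, xh) for x, xh in zip(xs, xhats)) / len(xs)

avg = sequence_distortion(hamming, [0, 1, 1, 0], [0, 1, 0, 0])
print(avg)   # one mismatch out of four symbols
```

For the binary lossless problem of Section IV, the average Hamming distortion between the source and its reconstruction is exactly the average bit-error probability, which is why the distortion-rate function appears in the proof of Lemma 1.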
Definition 2: The distortion between sequences x^m and x̂^m is defined by

d(x^m, x̂^m) = (1/m) Σ_{i=1}^m d(x_i, x̂_i).   (16)

Definition 3: A (2^{mR}, m) rate-distortion code consists of an encoding function

f_m : X^m → {1, 2, ..., 2^{mR}}   (17)

and a decoding (reproduction) function

g_m : {1, 2, ..., 2^{mR}} → X̂^m.   (18)

The distortion associated with the (2^{mR}, m) code is defined as

D = E[d(x^m, g_m(f_m(x^m)))],   (19)

where the expectation is with respect to the probability distribution on x.

Definition 4: A rate-distortion pair (R, D) is said to be achievable if there exists a sequence of (2^{mR}, m) rate-distortion codes (f_m, g_m) with lim_{m→∞} E[d(x^m, g_m(f_m(x^m)))] ≤ D. The rate-distortion function R(D) is the infimum of rates R such that (R, D) is achievable for a given distortion D. The distortion-rate function D(R) is the infimum of all distortions D such that (R, D) is achievable for a given rate R.

Theorem 2 (R(D) for a Gaussian source): The rate-distortion function for a Gaussian source N(0, σ₀²) with squared-error distortion is

R(D) = ½ log(σ₀²/D) for 0 ≤ D ≤ σ₀², and R(D) = 0 for D > σ₀².   (20)

The proof of this theorem tells us that the codebook can be constructed by choosing 2^{mR} points independently from the N(0, (σ₀² − D)I) distribution.

B. Channel coding

Definition 5: An additive white Gaussian noise (AWGN) channel with an average power constraint consists of a channel input X ∈ R and a channel output Y = X + Z, where Z ~ N(0, 1). The input has an average power constraint P; that is, over m channel uses,

(1/m) Σ_{i=1}^m X_i² ≤ P.   (21)

Definition 6: An (M, m) code for the AWGN channel consists of the following:
1) An index set {1, 2, ..., M}.
2) An encoding function X^m : {1, 2, ..., M} → R^m, yielding codewords X^m(1), X^m(2), ..., X^m(M). The set of codewords is called the codebook.
3) A decoding function g : R^m → {1, 2, ..., M}, which is a deterministic rule that assigns a guess to each possible received vector.

Definition 7 (Probability of error): Let λ_i = Pr(g(Y^m) ≠ i | X^m = X^m(i)) be the conditional probability of error given that index i was sent.
The average probability of error is defined as

P_e^{(m)} = (1/M) Σ_{i=1}^M λ_i,   (22)

and the maximal probability of error is defined as

λ^{(m)} = max_{i ∈ {1, 2, ..., M}} λ_i.   (23)

Definition 8: A rate R is said to be achievable if there exists a sequence of (2^{mR}, m) codes such that the maximal probability of error λ^{(m)} → 0 as m → ∞.

Definition 9: The capacity of a memoryless channel is the supremum of all achievable rates.

Theorem 3 (Channel coding theorem): The capacity of an additive white Gaussian noise channel with noise variance 1 and an average power constraint P is

C = ½ log(1 + P).   (24)
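A small Monte Carlo experiment (ours; the block length, rate, and power are illustrative and far from asymptotic) instantiates Definitions 5-7: a random Gaussian codebook with minimum-distance decoding succeeds at a rate well below the capacity of Theorem 3:

```python
import numpy as np

# Random-coding demo for the AWGN channel: M = 2^(mR) Gaussian codewords,
# minimum-distance decoding. Here C = 0.5*log2(1 + P) ~ 2.04 bits >> R = 0.2.
rng = np.random.default_rng(1)
m, R, P = 60, 0.2, 16.0
M = 2 ** int(m * R)                      # 2^12 = 4096 codewords

codebook = rng.normal(0.0, np.sqrt(P), size=(M, m))
sent = 17                                # transmit an arbitrary codeword index
y = codebook[sent] + rng.normal(0.0, 1.0, m)
decoded = int(np.argmin(np.sum((codebook - y) ** 2, axis=1)))
print(decoded)
```

Because the rate is far below capacity, the transmitted codeword is much closer to the received vector than any of the other 4095 codewords, so the decoder recovers the index; the exponential decay of the error probability in m is what the JSCC scheme of Section III relies on.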

In addition, the error probability converges to zero exponentially in m [3], and the capacity can be achieved by choosing a codebook of 2^{mR} points independently from the N(0, PI) distribution.

APPENDIX II
PERFORMANCE COMPARISON WITH THE SCALAR SCHEME IN [9]

For the family of problems and the quantization scheme in [9], we find lower bounds on the cost at time 1. We follow the notation of [9] in this section. B₀ is used to denote the 0-th bin, that is, the bin that includes the origin, and B is the bin size.

cost₁ = k² E[(γ_B(x₀) − x₀)²] ≥ k² E[ x₀² 1{x₀ ∈ B₀} ] = k² (1/(√(2π) σ)) ∫_{−B/2}^{B/2} x² e^{−x²/(2σ²)} dx.

For the particular sequence of problem parameters n in [9], the size of the n-th bin is B_n = n, σ_n = n², and k_n is chosen as in [9]. Therefore,

cost₁,n ≥ k_n² (1/(√(2π) n²)) ∫_{−n/2}^{n/2} x² e^{−x²/(2n⁴)} dx ≥ k_n² (1/(√(2π) n²)) e^{−n²/(8n⁴)} ∫_{−n/2}^{n/2} x² dx = k_n² ( n / (12√(2π)) ) e^{−1/(8n²)},

and the ratio of this bound to k_n² increases to infinity as n → ∞. In comparison, the joint source-channel scheme proposed in Section III has cost k_n². Thus the ratio cost₁,n / k_n² diverges for the joint source-channel scheme. Hence, the ratio of the costs for the scalar scheme in [9] and the vector scheme proposed here diverges to infinity.

APPENDIX III
PROOF OF THE LOWER BOUND ON COMPLEXITY FOR LOSSLESS SOURCE CODING

The proof is similar to that for the rate-complexity tradeoff for channel coding over a BSC []. In the following, we use P to denote the underlying source that generates symbols distributed Bernoulli(p). G denotes a test source generating symbols Bernoulli(g). We use P(x^n) to denote the probability of a sequence of length n under the source behavior P. Let P_{e,i}(P, 1) denote the error probability of the i-th source bit conditioned on the bit being 1. Similar notation is used for the test source G, and for conditioning on the bit being 0. P_e(P) denotes the average error probability

P_e(P) = (1/m) Σ_{i=1}^m P_{e,i}(P).   (25)

We proceed by a sequence of lemmas.

Lemma 1 (Lower bound on P_e under the test source G): Consider a test source G that generates iid binary symbols distributed Bernoulli(g).
If a rate-R code is used for lossless coding of G with R < h(g), then the average probability of bit error is lower bounded by

P_e(G) ≥ h⁻¹(h(g) − R).   (26)

Proof: For R < h(g), consider the Hamming distortion between the source symbols and the reconstruction. The average Hamming distortion is also the average error probability. Using the distortion-rate function for a binary source [, Pg. 343], the distortion D(R) is bounded below by

D(R) ≥ h⁻¹(h(g) − R).   (27)

Let x^m denote the source sequence. Consider the i-th message bit x_i. Its encoding and decoding are based on a particular neighborhood of source symbols x^n_{nd,i} of size n. The decoding is error-free if these neighborhood source symbols x^n_{nd,i} lie in the region D_{i,0} when the i-th bit is 0, and in D_{i,1} when the i-th bit is 1.

Lemma 2: Let A be a set of source sequences x^n. Then,

P(A) ≥ f(G(A)),   (28)

where

f(y) = (y/2) ( p(1 − g) / (g(1 − p)) )^{ǫ(y)n} 2^{−nD(g‖p)}   (29)

is a convex-∪, increasing function of y, and where

ǫ(y) = K(g) √( (1/n) log(2/y) ).   (30)

Proof: Define the typical set T_{ǫ,G} as follows:

T_{ǫ,G} = { x^n s.t. |Σ_{i=1}^n x_i − ng| ≤ ǫn }.   (31)

Then, as shown in [, Lemma 9], for

ǫ = K(g) √( (1/n) log(2/G(A)) ),   (32)

we have

G(T^c_{ǫ,G}) ≤ G(A)/2.   (33)

That K(g) is as in (13) is derived in [6, Prop. 4.]. Now, under the test source G,

Σ_{x^n ∈ A} G(x^n) = Σ_{x^n ∈ A ∩ T_{ǫ,G}} G(x^n) + Σ_{x^n ∈ A ∩ T^c_{ǫ,G}} G(x^n) ≤ Σ_{x^n ∈ A ∩ T_{ǫ,G}} G(x^n) + G(T^c_{ǫ,G}).

Choosing ǫ as in (32), it follows that

Σ_{x^n ∈ A ∩ T_{ǫ,G}} G(x^n) ≥ G(A) − G(A)/2 = G(A)/2.   (34)

Let n(x^n) be the number of ones in x^n. Then,

P(A) ≥ Σ_{x^n ∈ A ∩ T_{ǫ,G}} P(x^n)
 = Σ_{x^n ∈ A ∩ T_{ǫ,G}} [P(x^n)/G(x^n)] G(x^n)
 = Σ_{x^n ∈ A ∩ T_{ǫ,G}} [p^{n(x^n)} (1-p)^{n-n(x^n)} / (g^{n(x^n)} (1-g)^{n-n(x^n)})] G(x^n)
 = ((1-p)/(1-g))^n Σ_{x^n ∈ A ∩ T_{ǫ,G}} (p(1-g)/(g(1-p)))^{n(x^n)} G(x^n)
 ≥ ((1-p)/(1-g))^n (p(1-g)/(g(1-p)))^{ng+ǫn} Σ_{x^n ∈ A ∩ T_{ǫ,G}} G(x^n)
 ≥ 2^{-n(D(g||p) + ǫ log(g(1-p)/(p(1-g))))} (G(A) - ǫ(G(A)))
 = f(G(A)).

The function f obtained is the same as that in [21, Lemma 8] for the case of rate-complexity tradeoffs for channel coding over a BSC. Therefore, the proofs of convexity and monotonicity of f are the same as those of [21, Lemma 8]. The Lemma then follows from the monotonicity.

Now, to complete the proof of the Theorem, note that

P_e(P) = p P_e(P, 1) + (1-p) P_e(P, 0).

Conditioned on x_i = 1, choose A = D_{i,0} in Lemma 2. Then,

P_{e,i}(P, 1) ≥ f(P_{e,i}(G, 1)).   (35)

Summing over i = 1, 2, ..., m, dividing by m, and using the convexity of f,

P_e(P, 1) = (1/m) Σ_{i=1}^{m} P_{e,i}(P, 1) ≥ (1/m) Σ_{i=1}^{m} f(P_{e,i}(G, 1)) ≥ f(P_e(G, 1)).   (36)

Similarly,

P_e(P, 0) ≥ f(P_e(G, 0)).

Thus,

P_e(P) = p P_e(P, 1) + (1-p) P_e(P, 0)
 ≥ p f(P_e(G, 1)) + (1-p) f(P_e(G, 0))
 ≥ f(p P_e(G, 1) + (1-p) P_e(G, 0))   (37)
 ≥ f(p P_e(G, 1) + p P_e(G, 0))   (38)
 ≥ f(p max{P_e(G, 1), P_e(G, 0)}),   (39)

since p < 1-p. From Lemma 1, g P_e(G, 1) + (1-g) P_e(G, 0) ≥ D_G(R), where D_G(R) := h^{-1}(h(g) - R). Therefore,

max{P_e(G, 1), P_e(G, 0)} ≥ D_G(R).   (40)

The Theorem follows.

REFERENCES

[1] H. S. Witsenhausen, "A counterexample in stochastic optimum control," SIAM Journal on Control, vol. 6, no. 1, pp. 131-147, Jan. 1968.
[2] Y.-C. Ho and T. Chang, "Another look at the nonclassical information structure problem," IEEE Transactions on Automatic Control, 1980.
[3] C. H. Papadimitriou and J. N. Tsitsiklis, "Intractable problems in control theory," SIAM Journal on Control and Optimization, vol. 24, no. 4, 1986.
[4] R. Bansal and T. Basar, "Stochastic teams with nonclassical information revisited: When is an affine control optimal?" IEEE Transactions on Automatic Control, 1987.
[5] M. Baglietto, T. Parisini, and R. Zoppoli, "Nonlinear approximations for the solution of team optimal control problems," Proceedings of the IEEE Conference on Decision and Control (CDC), 1997.
[6] J. T. Lee, E. Lau, and Y.-C. L. Ho, "The Witsenhausen counterexample: A hierarchical search approach for nonconvex optimization problems," IEEE Transactions on Automatic Control, vol. 46, no. 3, 2001.
[7] M. Rotkowitz, "Linear controllers are uniformly optimal for the Witsenhausen counterexample," Proceedings of the 45th IEEE Conference on Decision and Control (CDC), Dec. 2006.
[8] Y. C. Ho, M. P. Kastner, and E. Wong, "Teams, signaling, and information theory," IEEE Transactions on Automatic Control, vol. 23, no. 2, Apr. 1978.
[9] S. K. Mitter and A. Sahai, "Information and control: Witsenhausen revisited," in Learning, Control and Hybrid Systems: Lecture Notes in Control and Information Sciences 241, Y. Yamamoto and S. Hara, Eds. New York, NY: Springer, 1999.
[10] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. New York: Cambridge University Press, 2005.
[11] S. Tatikonda, A. Sahai, and S. K. Mitter, "Control of LQG systems under communication constraints," in Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, FL, Dec. 1998.
[12] A. Sahai, S. Tatikonda, and S. K. Mitter, "Control of LQG systems under communication constraints," in Proceedings of the 1999 American Control Conference, San Diego, CA, Jun. 1999.
[13] S. Tatikonda, "Control under communication constraints," Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, 2000.
[14] A. Sahai, "Any-time information theory," Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, 2001.
[15] N. Elia, "When Bode meets Shannon: control-oriented feedback communication schemes," IEEE Transactions on Automatic Control, vol. 49, no. 9, Sep. 2004.
[16] S. Tatikonda, A. Sahai, and S. K. Mitter, "Stochastic linear control over a communication channel," IEEE Transactions on Automatic Control, vol. 49, no. 9, Sep. 2004.
[17] A. Sahai, "Evaluating channels for control: Capacity reconsidered," in Proceedings of the 2000 American Control Conference, Chicago, IL, Jun. 2000.
[18] A. Sahai and S. K. Mitter, "The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link. Part I: Scalar systems," IEEE Transactions on Information Theory, vol. 52, no. 8, Aug. 2006.
[19] N. C. Martins, "Witsenhausen's counterexample holds in the presence of side information," Proceedings of the 45th IEEE Conference on Decision and Control (CDC), 2006.
[20] M. Costa, "Writing on dirty paper," IEEE Transactions on Information Theory, vol. 29, no. 3, pp. 439-441, May 1983.
[21] A. Sahai and P. Grover, "The price of certainty: waterslide curves and the gap to capacity," submitted to IEEE Transactions on Information Theory, available online, Dec. 2007.
[22] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.
[23] R. G. Gallager, Information Theory and Reliable Communication. New York, NY: John Wiley, 1971.
[24] A. Sahai, "Why block-length and delay behave differently if feedback is present," IEEE Transactions on Information Theory, May 2008. [Online]. Available: sahai/papers/focusingbound.pdf
[25] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge University Press, 2007.
[26] T. Weissman, E. Ordentlich, G. Seroussi, S. Verdu, and M. J. Weinberger, "Inequalities for the L1 deviation of the empirical distribution," Tech. Rep., Jun. 2003.


More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL 2006 1545 Bounds on Capacity Minimum EnergyPerBit for AWGN Relay Channels Abbas El Gamal, Fellow, IEEE, Mehdi Mohseni, Student Member, IEEE,

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR 1 Stefano Rini, Daniela Tuninetti and Natasha Devroye srini2, danielat, devroye @ece.uic.edu University of Illinois at Chicago Abstract

More information

Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels

Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels Shuangqing Wei, Ragopal Kannan, Sitharama Iyengar and Nageswara S. Rao Abstract In this paper, we first provide

More information

On Compound Channels With Side Information at the Transmitter

On Compound Channels With Side Information at the Transmitter IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 52, NO 4, APRIL 2006 1745 On Compound Channels With Side Information at the Transmitter Patrick Mitran, Student Member, IEEE, Natasha Devroye, Student Member,

More information

On Scalable Source Coding for Multiple Decoders with Side Information

On Scalable Source Coding for Multiple Decoders with Side Information On Scalable Source Coding for Multiple Decoders with Side Information Chao Tian School of Computer and Communication Sciences Laboratory for Information and Communication Systems (LICOS), EPFL, Lausanne,

More information

Midterm Exam Information Theory Fall Midterm Exam. Time: 09:10 12:10 11/23, 2016

Midterm Exam Information Theory Fall Midterm Exam. Time: 09:10 12:10 11/23, 2016 Midterm Exam Time: 09:10 12:10 11/23, 2016 Name: Student ID: Policy: (Read before You Start to Work) The exam is closed book. However, you are allowed to bring TWO A4-size cheat sheet (single-sheet, two-sided).

More information

On Scalable Coding in the Presence of Decoder Side Information

On Scalable Coding in the Presence of Decoder Side Information On Scalable Coding in the Presence of Decoder Side Information Emrah Akyol, Urbashi Mitra Dep. of Electrical Eng. USC, CA, US Email: {eakyol, ubli}@usc.edu Ertem Tuncel Dep. of Electrical Eng. UC Riverside,

More information

Information Dimension

Information Dimension Information Dimension Mina Karzand Massachusetts Institute of Technology November 16, 2011 1 / 26 2 / 26 Let X would be a real-valued random variable. For m N, the m point uniform quantized version of

More information

Cooperative Communication with Feedback via Stochastic Approximation

Cooperative Communication with Feedback via Stochastic Approximation Cooperative Communication with Feedback via Stochastic Approximation Utsaw Kumar J Nicholas Laneman and Vijay Gupta Department of Electrical Engineering University of Notre Dame Email: {ukumar jnl vgupta}@ndedu

More information

Distributed Functional Compression through Graph Coloring

Distributed Functional Compression through Graph Coloring Distributed Functional Compression through Graph Coloring Vishal Doshi, Devavrat Shah, Muriel Médard, and Sidharth Jaggi Laboratory for Information and Decision Systems Massachusetts Institute of Technology

More information

ProblemsWeCanSolveWithaHelper

ProblemsWeCanSolveWithaHelper ITW 2009, Volos, Greece, June 10-12, 2009 ProblemsWeCanSolveWitha Haim Permuter Ben-Gurion University of the Negev haimp@bgu.ac.il Yossef Steinberg Technion - IIT ysteinbe@ee.technion.ac.il Tsachy Weissman

More information

The Gallager Converse

The Gallager Converse The Gallager Converse Abbas El Gamal Director, Information Systems Laboratory Department of Electrical Engineering Stanford University Gallager s 75th Birthday 1 Information Theoretic Limits Establishing

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

Lecture 2. Capacity of the Gaussian channel

Lecture 2. Capacity of the Gaussian channel Spring, 207 5237S, Wireless Communications II 2. Lecture 2 Capacity of the Gaussian channel Review on basic concepts in inf. theory ( Cover&Thomas: Elements of Inf. Theory, Tse&Viswanath: Appendix B) AWGN

More information

ASIGNIFICANT research effort has been devoted to the. Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach

ASIGNIFICANT research effort has been devoted to the. Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 42, NO 6, JUNE 1997 771 Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach Xiangbo Feng, Kenneth A Loparo, Senior Member, IEEE,

More information

Essential Maths 1. Macquarie University MAFC_Essential_Maths Page 1 of These notes were prepared by Anne Cooper and Catriona March.

Essential Maths 1. Macquarie University MAFC_Essential_Maths Page 1 of These notes were prepared by Anne Cooper and Catriona March. Essential Maths 1 The information in this document is the minimum assumed knowledge for students undertaking the Macquarie University Masters of Applied Finance, Graduate Diploma of Applied Finance, and

More information

Learning Approaches to the Witsenhausen Counterexample From a View of Potential Games

Learning Approaches to the Witsenhausen Counterexample From a View of Potential Games Learning Approaches to the Witsenhausen Counterexample From a View of Potential Games Na Li, Jason R. Marden and Jeff S. Shamma Abstract Since Witsenhausen put forward his remarkable counterexample in

More information

Optimality of Walrand-Varaiya Type Policies and. Approximation Results for Zero-Delay Coding of. Markov Sources. Richard G. Wood

Optimality of Walrand-Varaiya Type Policies and. Approximation Results for Zero-Delay Coding of. Markov Sources. Richard G. Wood Optimality of Walrand-Varaiya Type Policies and Approximation Results for Zero-Delay Coding of Markov Sources by Richard G. Wood A thesis submitted to the Department of Mathematics & Statistics in conformity

More information