The error exponent with delay for lossless source coding

The Error Exponent with Delay for Lossless Source Coding

Cheng Chang and Anant Sahai
Wireless Foundations, University of California at Berkeley

Abstract: In channel coding, reliable communication takes place at rates below capacity at the fundamental cost of end-to-end delay. Error exponents tell us how much faster convergence is when we settle for less rate. For lossless source coding, entropy takes the place of capacity, and error exponents tell us how much faster convergence is when we use more rate. While in channel coding without feedback the block error exponent is a good proxy for studying the more fundamental tradeoff with fixed end-to-end delay, it is not so in source coding. Block-coding error exponents are quite conservative (despite being tight!) when it comes to the tradeoff with delay. Nonblock codes can achieve much better performance with fixed delay, and we present both the fundamental bound and how to achieve it in a delay-universal manner. The proof gives substance to Shannon's cryptic statement about how the duality between source and channel coding is like the duality between the past and the future.

I. INTRODUCTION AND PROBLEM SETUP

Shannon closed his seminal paper on lossy source coding [1] with an intriguing comment: "[The duality between source and channel coding] can be pursued further and is related to a duality between past and future and the notions of control and knowledge. Thus we may have knowledge of the past and cannot control it; we may control the future but have no knowledge of it." While there has been much work exploring the relationship between source and channel coding, no connection to the past and the future has emerged so far. In this paper, we demonstrate such a connection by looking at the fundamental error exponent with respect to fixed delay between the time a message is first made known to the encoder and its deadline at the decoder.

For channel coding without feedback, the dominant error event is caused by the channel's future behavior, in that errors can be forced by mild channel misbehavior during the period between the message arrival time and its deadline [2], [3]. This sort of mild misbehavior is exactly what is bounded by the traditional sphere-packing arguments for block coding. When feedback is allowed, the encoder can do flow control and become robust to such mild misbehavior even when facing fixed-delay deadlines [4], [3]. The fundamental limits are instead given by the focusing bound [3].

This paper develops a lossless source-coding counterpart to the focusing bound. This reveals that the dominant source of errors is the past misbehavior of the source, over which the encoder has no control. Since the communication medium is a noiseless fixed-rate bit-pipe, the future is entirely under the control of the encoder. This bound is also asymptotically achievable using fixed-to-variable codes whose variable-rate nature is smoothed out by using a queue.

A. Review of block source coding results

The discrete memoryless source $\mathcal{S}$ generates iid random variables $X_i$ from a finite alphabet $\mathcal{X}$ according to distribution $p_X$. Without loss of generality, assume $p_X(x) > 0$ for all $x \in \mathcal{X}$. A rate-$R$ block source coding system for $n$ source symbols consists of an encoder-decoder pair $(E_n, D_n)$ where

$$E_n : \mathcal{X}^n \to \{0,1\}^{\lfloor nR \rfloor}, \qquad E_n(x_1^n) = b_1^{\lfloor nR \rfloor}$$
$$D_n : \{0,1\}^{\lfloor nR \rfloor} \to \mathcal{X}^n, \qquad D_n(b_1^{\lfloor nR \rfloor}) = \widehat{x}_1^n$$

The probability of block decoding error is $P(x_1^n \neq \widehat{x}_1^n) = P(x_1^n \neq D_n(E_n(x_1^n)))$.
Shannon proved that arbitrarily small error probabilities are achievable by letting $n$ get big, as long as the encoder rate is larger than the entropy of the source, $R > H(p_X)$. Furthermore, it turns out that the error probability goes to zero exponentially in $n$.

Lemma 1 (from [5]): For a discrete memoryless source $p_X$ and encoder rate $R$ with $H(p_X) < R < \log_2 |\mathcal{X}|$: for every $\epsilon > 0$ there exists $K(\epsilon) < \infty$ such that for all $n \geq 0$ there is a block encoder-decoder pair $(E_n, D_n)$ satisfying

$$P(x_1^n \neq \widehat{x}_1^n) \leq K(\epsilon)\, 2^{-n(E_b(R) - \epsilon)} \qquad (1)$$

This result is asymptotically tight, in the sense that for any sequence of encoder-decoder pairs $(E_n, D_n)$,

$$\limsup_{n \to \infty} -\frac{1}{n} \log_2 P(x_1^n \neq \widehat{x}_1^n) \leq E_b(R) \qquad (2)$$

where $E_b(R)$ is the block source coding reliability function:

$$E_b(R) = \min_{q : H(q) \geq R} D(q \,\|\, p_X) \qquad (3)$$

Paralleling the definition of the Gallager function for channel coding [6], define

$$E_0(\rho) = (1+\rho) \log_2 \Big[ \sum_x p_X(x)^{\frac{1}{1+\rho}} \Big] \qquad (4)$$

Then, as mentioned as an exercise in [5]:

$$E_b(R) = \sup_{\rho \geq 0} \{\rho R - E_0(\rho)\} = D(p_X^{\rho} \,\|\, p_X) \qquad (5)$$

where $p_X^{\rho}$ is the tilted distribution of the parameter $\rho$ satisfying $H(p_X^{\rho}) = R$. The tilted distribution of parameter $\rho \in (-1, \infty)$ of a distribution $p_X$ is:

$$p_X^{\rho}(x) = \frac{p_X(x)^{\frac{1}{1+\rho}}}{\sum_s p_X(s)^{\frac{1}{1+\rho}}} \qquad (6)$$
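As a quick numerical illustration (our own sketch, not part of the original development), the snippet below computes the tilted distribution (6), the function $E_0(\rho)$ of (4), and $E_b(R)$ via the optimization (5), and checks that the maximizing $\rho$ indeed satisfies $H(p_X^{\rho}) \approx R$ and that the divergence in (5) matches. The distribution `p` and the rate `R` are arbitrary example choices (the ternary source happens to reappear in Section IV).

```python
import numpy as np

def tilted(p, rho):
    """Tilted distribution of parameter rho, eq. (6)."""
    w = p ** (1.0 / (1.0 + rho))
    return w / w.sum()

def E0(p, rho):
    """Gallager-style function of eq. (4), in bits."""
    return (1.0 + rho) * np.log2(np.sum(p ** (1.0 / (1.0 + rho))))

def Eb(p, R, rhos=np.linspace(0.0, 30.0, 30001)):
    """Block reliability function via eq. (5): sup_{rho>=0} rho*R - E0(rho).
    Coarse grid search; good enough for a sketch."""
    Z = (p[None, :] ** (1.0 / (1.0 + rhos[:, None]))).sum(axis=1)
    vals = rhos * R - (1.0 + rhos) * np.log2(Z)
    i = int(np.argmax(vals))
    return vals[i], rhos[i]

p = np.array([0.9, 0.05, 0.05])   # example ternary source, reused in Section IV
R = 1.3                            # any rate with H(p) < R < log2(3)
e_b, rho_hat = Eb(p, R)
q = tilted(p, rho_hat)
print(f"E_b(R) = {e_b:.4f} at rho = {rho_hat:.3f}")
print(f"H(p^rho) = {-(q * np.log2(q)).sum():.4f}  (should be close to R = {R})")
print(f"D(p^rho || p) = {(q * np.log2(q / p)).sum():.4f}  (should equal E_b(R))")
```

The same routines are reused by the sketches later in the paper.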

Fig. 1. Delay-universal sequential source coding at $R = 2$ (time line of the source symbols, the encoded bits $b_1(x_1), b_2(x_1), b_3(x_2), \ldots$, and the running estimates $\widehat{x}_1(1), \widehat{x}_2(2), \widehat{x}_3(3), \ldots$).

$H(p_X^{\rho})$ is monotonically increasing in $\rho$, with $p_X^{0} = p_X$; as shown in [7], $\frac{\partial H(p_X^{\rho})}{\partial \rho} \geq 0$ and is generally strictly positive unless $p_X$ is uniform on $\mathcal{X}$. Thus the optimal $\rho$ corresponding to $R$ is unique unless $p_X$ is uniform over $\mathcal{X}$, which is an uninteresting case.

B. Sequential source coding

Rather than being known in advance, the source symbols enter the encoder in a real-time fashion. We assume that the source $\mathcal{S}$ generates one source symbol per second from a finite alphabet $\mathcal{X}$. The $j$-th source symbol $x_j$ is not known at the encoder until time $j$. Rate-$R$ operation means that the encoder sends one binary bit to the decoder every $\frac{1}{R}$ seconds. For obvious reasons, we focus on cases with $H(p_X) < R < \log_2 |\mathcal{X}|$.

Definition 1: A sequential encoder-decoder pair $E, D$ is a sequence of maps $\{E_j\}$, $j = 1, 2, \ldots$ and $\{D_j\}$, $j = 1, 2, \ldots$. The outputs of $E_j$ are the outputs of the encoder $E$ from time $j-1$ to $j$:

$$E_j : \mathcal{X}^j \to \{0,1\}^{\lfloor jR \rfloor - \lfloor (j-1)R \rfloor}, \qquad E_j(x_1^j) = b_{\lfloor (j-1)R \rfloor + 1}^{\lfloor jR \rfloor}$$

The outputs of $D_j$ are the decoding decisions on the arrived source symbols by time $j$, based on the binary bits received up to time $j$:

$$D_j : \{0,1\}^{\lfloor jR \rfloor} \to \mathcal{X}^{j-d}, \qquad D_j(b_1^{\lfloor jR \rfloor}) = \widehat{x}_1^{j-d}(j)$$

where $\widehat{x}_{j-d}(j)$ is the estimate, at time $j$, of $x_{j-d}$, and thus has an end-to-end delay of $d$ seconds. In a delay-universal scheme, the decoder emits revised estimates for all source symbols so far. A rate $R = 2$ delay-universal sequential source coding system is illustrated in Figure 1.

For sequential source coding, it is important to study the symbol-by-symbol decoding error probability instead of the block coding error probability.

Definition 2: A family of rate-$R$ sequential source codes $\{(E^d, D^d)\}$ is said to achieve delay-reliability $E_s(R)$ if and only if

$$\forall i: \quad \liminf_{d \to \infty} -\frac{1}{d} \log_2 P(x_i \neq \widehat{x}_i(i+d)) \geq E_s(R)$$

II. UPPER BOUND ON $E_s(R)$

To bound the best possible error exponent with fixed delay, we consider a genie-aided encoder/decoder pair and translate the block-coding bounds of [5] to the fixed-delay context. The arguments are analogous to the focusing bound derivation in [3] for the case of channel coding with feedback.

Theorem 1: For fixed-rate encodings of discrete memoryless sources, it is not possible to achieve an error exponent with fixed delay better than

$$E_s^*(R) = \inf_{\alpha > 1} (\alpha - 1)\, E_b\Big(\frac{\alpha R}{\alpha - 1}\Big) \qquad (7)$$

Proof: For simplicity of exposition, we ignore integer effects arising from the finite nature of $d$, $R$, etc. For every $\alpha > 1$ and delay $d$, consider a code running over its fixed-rate-$R$ noiseless channel until time $\alpha d$. By this time, the decoder will have committed to estimates for the source symbols up to time $i = \alpha d - d$. The total number of bits used during this period is $\alpha d R$. Now consider a genie that gives the encoder access to the first $i$ source symbols at the beginning of time, rather than forcing the encoder to get the source symbols one at a time. Simultaneously, loosen the requirements on the decoder by only demanding correct estimates for the first $i$ source symbols by the time $\alpha d$. In effect, the deadline for decoding the past source symbols is extended to the deadline of the $i$-th symbol itself.

Any lower bound to the error probability of the new problem is clearly also a bound for the original problem. Furthermore, the new problem is just a fixed-length block-coding problem requiring the encoding of $i$ source symbols into $\alpha d R$ bits. The rate per symbol is $\frac{\alpha d R}{i} = \frac{\alpha d R}{(\alpha - 1)d} = \frac{\alpha R}{\alpha - 1}$ bits. Theorem 2.15 in [5] tells us that such a code has a probability of error that is at least exponential in $i\, E_b(\frac{\alpha R}{\alpha - 1})$. Since $i = (\alpha - 1)d$, this translates into an error exponent of at most $(\alpha - 1) E_b(\frac{\alpha R}{\alpha - 1})$ with the delay parameter $d$. Since this is true for all $\alpha > 1$, we have a bound on the reliability function $E_s(R)$ with fixed delay $d$:

$$E_s(R) \leq \inf_{\alpha > 1} (\alpha - 1)\, E_b\Big(\frac{\alpha R}{\alpha - 1}\Big)$$

The right-hand side is defined to be $E_s^*(R)$. The minimizing $\alpha$ tells how much of the past, $(\alpha - 1)d$, is involved in the dominant error event. ∎
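The infimum in (7) can be explored numerically. This sketch (again our own illustration; it reuses `p`, `R`, and `Eb` from the previous snippet) sweeps $\alpha > 1$ on a coarse grid, evaluates $(\alpha - 1) E_b(\alpha R/(\alpha - 1))$, and reports the minimizing $\alpha$. Rates at or above $\log_2 |\mathcal{X}|$ are skipped, since the constraint set in (3) is then empty and $E_b$ is infinite.

```python
import numpy as np
# continues the previous sketch: reuses p, R, and Eb(...)

def Es_star(p, R, alphas=np.linspace(1.001, 40.0, 4000)):
    """Numerically evaluate eq. (7): inf_{alpha>1} (alpha-1) * E_b(alpha*R/(alpha-1))."""
    log2X = np.log2(len(p))
    best_val, best_alpha = np.inf, None
    for a in alphas:
        Ra = a * R / (a - 1.0)   # per-symbol rate of the genie-aided block problem
        if Ra >= log2X:          # E_b is +infinity here; such alpha cannot minimize
            continue
        val = (a - 1.0) * Eb(p, Ra)[0]
        if val < best_val:
            best_val, best_alpha = val, a
    return best_val, best_alpha

val, a_min = Es_star(p, R)
print(f"E_s*(R) ~ {val:.4f}, attained near alpha = {a_min:.2f}")
```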
This bound can be rewritten in terms of the $\rho$ parameter to get a form paralleling the symmetric channel case from [3].

Corollary 1:

$$E_s(R) \leq E_0(\rho^*) \qquad (8)$$

where $\rho^*$ satisfies $\rho^* R = E_0(\rho^*)$.

Proof: For convenience, define

$$G_R(\rho) = \rho R - E_0(\rho) \qquad (9)$$

and notice that $G_R(\rho^*) = 0$. As shown in [7]:

$$\frac{d G_R(\rho)}{d\rho} = R - H(p_X^{\rho}), \qquad \frac{d^2 G_R(\rho)}{d\rho^2} \leq 0, \qquad G_R(0) = 0, \qquad \lim_{\rho \to \infty} G_R(\rho) < 0 \qquad (10)$$

Fig. 2. $G_R(\rho)$.

So $G_R(\rho)$ is a concave function, as illustrated in Figure 2. Consider the $\rho_0 \geq 0$ for which $H(p_X^{\rho_0}) = R$. By concavity, we know that $\rho_0 < \rho^*$, and hence $H(p_X^{\rho^*}) > R$. Write

$$F(\alpha, \rho) = (\alpha - 1)\Big(\rho \frac{\alpha R}{\alpha - 1} - E_0(\rho)\Big) = \rho R + (\alpha - 1) G_R(\rho)$$

Then for any $R \in (H(p_X), \log_2 |\mathcal{X}|)$:

$$\frac{\partial F(\alpha, \rho)}{\partial \rho} = \alpha R - (\alpha - 1) H(p_X^{\rho}), \qquad \frac{\partial^2 F(\alpha, \rho)}{\partial \rho^2} \leq 0, \qquad F(\alpha, 0) = 0, \qquad \lim_{\rho \to \infty} F(\alpha, \rho) < 0 \qquad (11)$$

So for any $\alpha > 1$, let $\rho(\alpha)$ be the value of $\rho$ at which $F(\alpha, \rho)$ is maximized; $\rho(\alpha)$ is thus the unique solution to $\alpha R - (\alpha - 1) H(p_X^{\rho(\alpha)}) = 0$. Define

$$\alpha^* = \frac{H(p_X^{\rho^*})}{H(p_X^{\rho^*}) - R}$$

which is greater than 1 since $H(p_X^{\rho^*}) > R$, and which satisfies $\alpha^* R - (\alpha^* - 1) H(p_X^{\rho^*}) = 0$. Since $\frac{\partial F(\alpha^*, \rho)}{\partial \rho}$ vanishes at $\rho = \rho^*$, the point $\rho^*$ maximizes $F(\alpha^*, \rho)$ over all $\rho$. Using (5) and the above analysis:

$$E_s(R) \leq \inf_{\alpha > 1} (\alpha - 1) E_b\Big(\frac{\alpha R}{\alpha - 1}\Big) = \inf_{\alpha > 1} \sup_{\rho > 0} F(\alpha, \rho) \leq \sup_{\rho > 0} F(\alpha^*, \rho) = F(\alpha^*, \rho^*) = \rho^* R = E_0(\rho^*) \qquad (12)$$

This proves the desired upper bound. ∎
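Corollary 1 replaces the minimization over $\alpha$ with a scalar fixed-point condition, which is easy to check numerically. A sketch (reusing `p`, `R`, and `E0` from the earlier snippets): find the nonzero root of $G_R(\rho) = \rho R - E_0(\rho)$ by bisection and compare $E_0(\rho^*)$ against the direct minimization of the previous snippet.

```python
# continues the previous sketches: reuses p, R, E0(...), and Es_star(...)

def rho_star(p, R, hi=1.0):
    """Bisection for the nonzero root of G_R(rho) = rho*R - E0(p, rho).

    G_R is concave with G_R(0) = 0 and slope R - H(p) > 0 at rho = 0, so it
    has exactly one more root rho* > 0, with G_R > 0 in between."""
    G = lambda r: r * R - E0(p, r)
    lo = 1e-12                 # G > 0 just above 0 since R > H(p)
    while G(hi) > 0:           # expand until the downcrossing is bracketed
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if G(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

r = rho_star(p, R)
print(f"rho* = {r:.4f}, E_0(rho*) = {E0(p, r):.4f}")  # should match Es_star(p, R)[0]
```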

III. ACHIEVABLE EXPONENTS

To lower-bound the error exponent with delay, we give an explicit fixed-rate coding scheme that is a minor variation of the scheme analyzed in [8]. In [3], an achievable scheme was given for channel coding with feedback when there was an additional low-rate noise-free link that could carry flow-control information. In source coding, there is just a noise-free link, and the flow-control information is made implicit by using a prefix-free code. Just as in [3], the queuing behavior is what will determine the achieved delay error exponent. Rather than repeating the from-first-principles derivation of [3], here we give an alternate derivation by applying Cramér's theorem.

A. A sequential variable-length source coding scheme

Fig. 3. Sequential source coding using a variable-length code: one source symbol per second enters a block encoder of length $N$; the codewords enter an encoder buffer (FIFO) that is drained at one bit per $\frac{1}{R}$ seconds toward the decoder.

We are interested in the performance with asymptotically large delays $d$. A block length $N$ is chosen that is much smaller than the target end-to-end delays, while still being large enough. For a discrete memoryless source and large block lengths $N$, the best possible variable-length code is given in [5] and consists of two stages: first describing the type of the block using $O(|\mathcal{X}| \log_2 N)$ bits, and then describing which particular realization has occurred using a variable number of bits, roughly $N$ times the empirical entropy of the block. The overhead $O(|\mathcal{X}| \log_2 N)$ is asymptotically negligible, and the code is also universal in nature. While the above code will also work, for simplicity of analysis we instead consider a Shannon code built for a particular tilted distribution for $X_1^N$.

Definition 3: The instantaneous code $C_{N,\lambda}$ for $\lambda > 0$ is a mapping from $\mathcal{X}^N$ to a variable number of binary bits, $C_{N,\lambda}(x_1^N) = b_1^{l(x_1^N)}$, where $l(x_1^N)$ is the codeword length for source sequence $x_1^N$. The first bit is always 1, and the rest of the codeword is the Shannon codeword based on the $\lambda$-tilted distribution of $p_X$:

$$l(x_1^N) = 1 + \Big\lceil \sum_{i=1}^{N} \log_2 \frac{\sum_s p_X(s)^{\frac{1}{1+\lambda}}}{p_X(x_i)^{\frac{1}{1+\lambda}}} \Big\rceil \leq 2 + \sum_{i=1}^{N} \log_2 \frac{\sum_s p_X(s)^{\frac{1}{1+\lambda}}}{p_X(x_i)^{\frac{1}{1+\lambda}}} \qquad (13)$$

From (13), the longest codeword length $l_{\max}$ satisfies

$$l_{\max} \leq 2 + N \log_2 \frac{\sum_s p_X(s)^{\frac{1}{1+\lambda}}}{p_{\min}^{\frac{1}{1+\lambda}}} \qquad (14)$$

where $p_{\min} = \min_x p_X(x)$. The constant 2 is insignificant compared to $N$.
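To make Definition 3 concrete, the sketch below (our own; it reuses `p` from the earlier snippets and works with codeword lengths only, never assigning actual bit patterns) tabulates $l(\bar{x})$ for all length-$N$ blocks, and verifies the Kraft inequality and the worst-case length bound (14). The parameters `N` and `lam` are arbitrary example choices.

```python
import numpy as np
from itertools import product
# continues the previous sketches: reuses p

def code_lengths(p, N, lam):
    """Codeword lengths of Definition 3: one flag bit plus the Shannon length
    under the lambda-tilted iid distribution on length-N blocks (a sketch)."""
    beta = 1.0 / (1.0 + lam)
    q1 = p ** beta / np.sum(p ** beta)    # single-letter tilted distribution
    return {blk: 1 + int(np.ceil(-sum(np.log2(q1[s]) for s in blk)))
            for blk in product(range(len(p)), repeat=N)}

N, lam = 4, 0.5
L = code_lengths(p, N, lam)
kraft = sum(2.0 ** -l for l in L.values())
beta = 1.0 / (1.0 + lam)
bound = 2 + N * np.log2(np.sum(p ** beta) / p.min() ** beta)   # eq. (14)
print(f"Kraft sum = {kraft:.3f}  (must be <= 1)")
print(f"l_max = {max(L.values())}, bound (14) = {bound:.2f}")
```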

This variable-length code is turned into a fixed-rate code as follows.

Definition 4: The sequential source coding scheme is illustrated in Figure 3. At time $kN$, $k = 1, 2, \ldots$, the encoder uses the variable-length code $C_{N,\lambda}$ to encode the $k$-th source block $\bar{x}_k = x_{(k-1)N+1}^{kN}$ into a binary sequence $b(k)_1, b(k)_2, \ldots, b(k)_{l(\bar{x}_k)}$. This codeword is pushed into a FIFO queue with infinite buffer size. The encoder drains one bit from the queue every $\frac{1}{R}$ seconds. If the queue is empty, the encoder simply sends 0s to the decoder until new bits are pushed into the queue. (The initial 1 in each codeword is not really required, since the decoder knows the rate at which source symbols arrive at the encoder; thus it knows when the queue is empty and does not need to even interpret the 0s it receives.) The decoder knows the variable-length codebook, and the prefix-free nature of the Shannon code guarantees that everything can be decoded correctly.

B. Sequential error events

Fig. 4. Number of bits in the encoder buffer over time: an example sample path.

The number of bits in the encoder buffer at time $kN$ is a random-walk process with negative drift and a reflecting barrier. An example sample path is illustrated in Figure 4. At time $kN + d$, the decoder can make an error in estimating $\bar{x}_k$ if and only if part of the variable-length codeword for source block $\bar{x}_k$ is still in the encoder buffer. Since the FIFO queue drains deterministically, this means that when the $k$-th block's codeword entered the queue, it was already doomed to miss its deadline of $d$. Formally, for an error to occur, the number of bits in the buffer just after time $kN$ must exceed $dR$. Thus, meeting a specific end-to-end latency constraint over a fixed-rate noiseless link is like the buffer-overflow events analyzed in [8].

Define the random time $t_k N$ to be the last time before time $kN$ when the queue was empty. A missed deadline can occur only if $\sum_{i=t_k+1}^{k} l(\bar{x}_i) > (d + (k - t_k)N) R$. For arbitrary $t_k = t$, define the error-event probability

$$p_N(t) = P\Big( \sum_{i=t+1}^{k} l(\bar{x}_i) > (d + (k-t)N) R \Big)$$

Using the following lemma, we will derive an upper bound on $p_N(t)$ that is tight in the large-deviations sense.

Lemma 2 (Cramér's theorem [9]): Consider iid random variables $Y_i \in \Sigma = \{\sigma_1, \ldots, \sigma_{|\Sigma|}\}$ with $Y_i \sim p_Y$, and $f : \Sigma \to \mathbb{R}^+$; let $Z_i = f(Y_i)$ and write $S_n = \frac{1}{n} \sum_{i=1}^{n} Z_i$. For any closed set $F \subseteq \mathbb{R}^+$,

$$P(S_n \in F) \leq (n+1)^{|\Sigma|}\, 2^{-n \inf_{x \in F} I(x)}$$

where the rate function is [9]:

$$I(x) = \inf_{\nu : \sum_{i=1}^{|\Sigma|} \nu_i f(\sigma_i) = x} D(\nu \,\|\, p_Y)$$

with $\nu$ a distribution defined on $\Sigma$. It is shown in [9] that

$$I(x) = \sup_{\rho} \Big\{ \rho x - \log_2 \Big( \sum_{i=1}^{|\Sigma|} p_Y(\sigma_i)\, 2^{\rho f(\sigma_i)} \Big) \Big\}$$

Write $I(x, \rho) = \rho x - \log_2 ( \sum_{i=1}^{|\Sigma|} p_Y(\sigma_i) 2^{\rho f(\sigma_i)} )$; then for $x > E[f(Y)]$ and $\rho < 0$ we have $I(x, \rho) < 0$. Obviously $I(x, 0) = 0$, which means that the $\rho$ maximizing $I(x, \rho)$ is positive. This implies that $I(x)$ is monotonically increasing in $x$.

Using the definition of $l(\bar{x})$:

$$\log_2 \Big( \sum_{\bar{x} \in \mathcal{X}^N} p_X(\bar{x})\, 2^{\lambda l(\bar{x})} \Big) \leq 2\lambda + \log_2 \Big[ \Big( \sum_x p_X(x)^{\frac{1}{1+\lambda}} \Big) \Big( \sum_x p_X(x)^{\frac{1}{1+\lambda}} \Big)^{\lambda} \Big]^N = 2\lambda + N(1+\lambda) \log_2 \Big[ \sum_x p_X(x)^{\frac{1}{1+\lambda}} \Big] = 2\lambda + N E_0(\lambda) \qquad (15)$$

The $2\lambda$ is insignificant, and so, as a simple corollary of Cramér's theorem, we have an upper bound on $p_N(t)$:

$$p_N(t) = P\Big( \sum_{i=t+1}^{k} l(\bar{x}_i) \geq (d + (k-t)N) R \Big) \leq (k - t + 1)^{|\mathcal{X}|^N}\, 2^{-E_N(C_{N,\lambda}, R, k-t, d)}$$

where

$$E_N(C_{N,\lambda}, R, k-t, d) \geq \sup_{\rho} \Big\{ \rho (d + (k-t)N) R - (k-t) \log_2 \Big( \sum_{\bar{x}} p_X(\bar{x}) 2^{\rho l(\bar{x})} \Big) \Big\} \geq (d + (k-t)N) R \lambda - (k-t) \big[ 2\lambda + N E_0(\lambda) \big] = dR\lambda + (k-t) N \Big[ G_R(\lambda) - \frac{2\lambda}{N} \Big] \qquad (16)$$
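Since the missed-deadline event is exactly a queue-overflow event, the bound (16) can be sanity-checked by simulation. The sketch below (our own illustration, reusing `p` and `code_lengths` from the previous snippet) runs the FIFO queue of Definition 4 and counts blocks whose codewords have not fully drained within $d$ seconds. The parameter choices are arbitrary; with $\lambda = 0.5$ and $R = 1.3$ here, (16) promises an exponent of at least $R\lambda = 0.65$ once $N > 2\lambda / G_R(\lambda)$, and the rare-event probabilities fall below Monte-Carlo resolution quickly as $d$ grows.

```python
import numpy as np
# continues the previous sketches: reuses p and code_lengths(...)

def miss_rate(p, R, N, lam, d, n_blocks=400_000, seed=0):
    """Monte-Carlo estimate of P(block k misses its deadline kN + d)."""
    rng = np.random.default_rng(seed)
    L = code_lengths(p, N, lam)
    blocks = rng.choice(len(p), size=(n_blocks, N), p=p)
    backlog, misses = 0.0, 0
    for k in range(n_blocks):
        # N*R bits drain between consecutive block arrivals
        backlog = max(backlog - N * R, 0.0) + L[tuple(blocks[k])]
        if backlog > d * R:      # codeword cannot drain by the deadline
            misses += 1
    return misses / n_blocks

for d in (8, 16, 24):
    pm = miss_rate(p, R=1.3, N=5, lam=0.5, d=d)
    exp_hat = -np.log2(pm) / d if pm > 0 else float("inf")
    print(f"d = {d:2d}: miss rate = {pm:.2e}, empirical exponent = {exp_hat:.2f}")
```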

C. Lower bound on $E_s(R)$

Theorem 2: For any $\epsilon > 0$, by appropriate choice of $N, \lambda$, it is possible to achieve an error exponent with delay of $E_0(\rho^*) - \epsilon$, universally over large enough delays $d$, for the $\rho^*$ satisfying $\rho^* R = E_0(\rho^*)$.

Proof: For the sequential source coding scheme using $C_{N,\lambda}$, with $t_k$ the last time before $kN$ when the buffer was empty, the probability of decoding error for the $k$-th source block at time $kN + d$ satisfies:

$$P(\bar{x}_k \neq \widehat{\bar{x}}_k(kN+d)) \leq \sum_{t=0}^{k-1} P\Big(t_k = t, \sum_{i=t+1}^{k} l(\bar{x}_i) \geq (d + (k-t)N) R\Big) \leq \sum_{t=0}^{k-1} P\Big(\sum_{i=t+1}^{k} l(\bar{x}_i) \geq (d + (k-t)N) R\Big)$$
$$\leq \sum_{t=0}^{k-1} (k-t+1)^{|\mathcal{X}|^N}\, 2^{-E_N(C_{N,\lambda}, R, k-t, d)} \leq 2^{-dR\lambda} \sum_{t=0}^{k-1} (k-t+1)^{|\mathcal{X}|^N}\, 2^{-(k-t)N[G_R(\lambda) - \frac{2\lambda}{N}]}$$

The above holds for all $N, \lambda$. Pick $\lambda = \rho^* - \frac{\epsilon}{R}$. From the discussion at the end of the previous section, we know that $G_R(\lambda) > 0$ for $0 < \lambda < \rho^*$. Choose $N > \frac{2\lambda}{G_R(\lambda)}$. Then define

$$K(\epsilon, N, \lambda) = \sum_{i=0}^{\infty} (i+1)^{|\mathcal{X}|^N}\, 2^{-iN[G_R(\lambda) - \frac{2\lambda}{N}]} < \infty$$

which is guaranteed to be finite since dying exponentials dominate polynomials in sums. Thus:

$$P(\bar{x}_k \neq \widehat{\bar{x}}_k(kN+d)) \leq K(\epsilon, N, \lambda)\, 2^{-dR\lambda} = K(\epsilon, N, \lambda)\, 2^{-d(\rho^* R - \epsilon)}$$

where the constant $K(\epsilon, N, \lambda)$ does not depend on the delay $d$ in question. Since $E_0(\rho^*) = \rho^* R$ and the encoder also is not targeted to the delay $d$, this scheme achieves the desired exponent delay-universally. ∎

This theorem effectively proves that $E_s^*(R) = E_0(\rho^*)$ is asymptotically achievable. For every realization of $\bar{x}$, the code lengths of this code differ from those of the optimal universal code by at most $O(\log N)$, which is insignificant compared to $N$. Thus, this delay exponent can also be attained universally over both discrete memoryless sources and delays $d$.

IV. AN EXAMPLE OF SUBOPTIMAL CODING

Large $N$ are not needed to do substantially better than block coding in the fixed-delay context. The advantages of variable-length encoding are so great that even very simple variable-length codes will outperform the best possible block code. Consider the following ternary example from [10]. Suppose $p_X(a) = 0.9$, $p_X(b) = 0.05$ and $p_X(c) = 0.05$, and consider rate $R = 3/2$. The best possible block-coding error exponent is 0.474. Now consider a suboptimal variable-length sequential coding scheme with $N = 2$ that maps the pair $(a, a)$ to a short codeword and maps each of the other eight source-symbol pairs to a distinct longer codeword. Simple birth-death Markov chain analysis reveals that this achieves an error exponent of 6.27 with fixed delay, despite being suboptimal even as a variable-length code!
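The birth-death analysis in this example amounts to computing the geometric decay rate of the queue's stationary tail. The sketch below is our own reconstruction under an assumed length assignment (two bits for $(a,a)$, five bits for each other pair), so its output need not reproduce the exponent quoted above; it solves $E[z^{\,l - NR}] = 1$ for the decay base $z^*$ and reports $R \log_2 z^*$ as the fixed-delay exponent.

```python
import numpy as np

def vl_delay_exponent(length_prob_pairs, N, R, hi=64.0):
    """Queue-tail exponent for a variable-length block code: find z* > 1 with
    E[z^(l - N*R)] = 1; the missed-deadline probability then decays roughly
    like 2^(-d * R * log2(z*)) in the delay d (standard birth-death argument)."""
    phi = lambda z: sum(q * z ** (l - N * R) for l, q in length_prob_pairs) - 1.0
    while phi(hi) < 0:       # some length exceeds N*R, so phi eventually > 0
        hi *= 2.0
    lo = 1.0 + 1e-9          # negative drift makes phi < 0 just above z = 1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) < 0 else (lo, mid)
    return R * np.log2(0.5 * (lo + hi))

# Assumed pair code for p = (0.9, 0.05, 0.05), N = 2, R = 3/2:
# (a,a) -> 2 bits with prob 0.81; each other pair -> 5 bits, total prob 0.19.
print(vl_delay_exponent([(2, 0.81), (5, 0.19)], N=2, R=1.5))
```

A shorter codeword for the dominant pair, or a higher drain rate $NR$, makes the walk's increments more negative, pushing $z^*$ and the exponent up.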
V. CONCLUSIONS AND FUTURE WORK

The lossless source-coding version of the uncertainty-focusing bound was developed and used to show that fixed-to-variable-length coding is optimal from an end-to-end latency point of view, even when the deadlines are specified in terms of a fixed latency. In a very precise sense, lossless source coding is like channel coding with feedback, and using block codes results in a substantial loss in the error exponents.

While the parametric form of the focusing bound is the same in the channel-coding and source-coding cases, the interpretation is a bit different. In source coding, the dominant error events always involve only the past. As the rate varies, the only change is how much of the past is involved. For channel coding with feedback, both the past and the future are involved, since the past is known while the future remains only partially controllable. In channel coding without feedback, only the future is involved, since the past is entirely unknown.

A similar story should hold for the error exponents of point-to-point lossy source coding. Once again, the source-coding context guarantees that only the past behavior will matter in the dominant error event. The results here hint that VQ followed by variable-rate entropy coding may also be optimal in the fixed-delay setting. A more interesting direction is extending our understanding of sequential distributed coding for correlated sources (Slepian-Wolf source coding). So far, we have only achievable exponents in general [7] and a good upper bound only for symmetric cases with side-information at the receiver [11]. For the case considered in [11], it turns out that only the future behavior of the source matters. We suspect that, depending on the rate point and the nature of the source, different combinations of the past and future will be involved in the dominant error event, and the optimal codes will in general have a mixed nature involving both queuing and binning.

ACKNOWLEDGMENTS

The authors thank Sanjoy Mitter, Tunc Simsek, Stark Draper, and Sandeep Pradhan for helpful discussions along the way, and acknowledge the support of NSF ITR Award: CNS in making this work possible.

REFERENCES

[1] C. E. Shannon, "Coding theorems for a discrete source with a fidelity criterion," IRE National Convention Record, vol. 7, no. 4, pp. 142-163, 1959.
[2] M. S. Pinsker, "Bounds on the probability and of the number of correctable errors for nonblock codes," Problemy Peredachi Informatsii, vol. 3, no. 4, Oct./Dec. 1967.
[3] A. Sahai, "Why block length and delay are not the same thing," submitted to IEEE Trans. Inform. Theory. [Online]. Available: sahai/papers/focusingbound.pdf
[4] A. Sahai, "How to beat the sphere-packing bound with feedback," submitted to ISIT.
[5] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Budapest: Akadémiai Kiadó, 1986.
[6] R. G. Gallager, Information Theory and Reliable Communication. New York, NY: John Wiley, 1971.
[7] C. Chang, S. Draper, and A. Sahai, "Sequential random binning for streaming distributed source coding," in preparation.
[8] F. Jelinek, "Buffer overflow in variable length coding of fixed rate sources," IEEE Transactions on Information Theory, vol. 14, pp. 490-501, 1968.
[9] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications. Springer, 1998.
[10] S. Draper, C. Chang, and A. Sahai, "Sequential random binning for streaming distributed source coding," in Proc. ISIT.
[11] C. Chang and A. Sahai, "Upper bound on error exponents with delay for lossless source coding with side-information," submitted to ISIT.
