Upper Bounds to Error Probability with Feedback


Barış Nakiboğlu    Lizhong Zheng
Research Laboratory of Electronics at MIT, Cambridge, MA
{nakib, lizhong}@mit.edu

Abstract— A new technique is proposed for upper bounding the error probability of fixed length block codes with feedback. The error analysis is inspired by Gallager's error analysis for block codes without feedback. The Zigangirov-D'yachkov encoding scheme is analyzed with this technique on binary input channels and k-ary symmetric channels.

I. INTRODUCTION

Shannon showed [6] that the capacity of discrete memoryless channels (DMCs) does not increase with feedback. Later it was shown that the exponential decay rate of the error probability of fixed length block codes cannot exceed the sphere packing exponent even when there is feedback, first by Dobrushin [3] for symmetric channels and later by Sheverdyaev [7] for general DMCs. In other words, for rates above the critical rate even the error exponent does not increase with feedback, as long as we restrict ourselves to fixed length block codes. Characterizing the improvement in the error exponent for rates below the critical rate is the most pressing open question in this stream of research.

The first work on the error analysis of block codes with feedback was by Berlekamp [1]. He obtained a closed form expression for the error exponent at zero rate for binary symmetric channels (BSCs). Later Zigangirov [8] proposed an encoding scheme for BSCs which reaches the sphere packing exponent for all rates above a critical rate R_Zcrit.² Furthermore, Zigangirov's encoding scheme reaches the optimal error exponent at zero rate, which was derived by Berlekamp in [1]. Later D'yachkov [4] proposed a generalization of Zigangirov's encoding scheme and obtained a coding theorem for general DMCs. However, the optimization problem in his coding theorem is quite involved and does not allow for simplifications that would lead to conclusions about the error exponents of general DMCs.
In [4], after pointing out this fact, D'yachkov focuses on binary input channels and k-ary symmetric channels and derives error exponent expressions for these channels.¹

¹There are a number of closely related models in which error exponent analysis has been successfully applied, like variable-length block codes, fixed length block codes with errors-and-erasures decoding, block codes on additive white Gaussian noise channels, and fixed/variable delay codes on DMCs. We refrain from discussing these variants mainly because understanding them will not help the reader much in understanding the work at hand.

²Evidently R_Zcrit < R_crit, where R_crit is the critical rate in the non-feedback case, i.e. the rate above which the random coding exponent is equal to the sphere packing exponent.

Our approach will be similar to D'yachkov's in the sense that we will first prove a coding theorem for general DMCs and then focus on particular cases and demonstrate its gains over non-feedback encoding schemes. We start by introducing the channel model we have at hand and the notation we use. After that we consider encoding schemes that use feedback and make an error analysis which is very similar to that of Gallager in [5]. Then we use our results in different cases, to recover the results of Zigangirov [8] and D'yachkov [4] for binary input channels and to improve D'yachkov's results [4] for k-ary symmetric channels.³

II. CHANNEL MODEL AND NOTATION

We have a discrete memoryless channel with input alphabet X = {1, 2, ..., |X|} and output alphabet Y = {1, 2, ..., |Y|}. The channel transition probabilities are given by a |X|-by-|Y| matrix W(y|x). In addition we assume that a noiseless, delay free feedback link exists from the receiver to the transmitter. Thus the transmitter learns the channel output at time k before the transmission of the symbol at time k+1. A length n block code with feedback for a message set M = {1, 2, ..., ⌈e^{nR}⌉} is a feedback encoding scheme together with a decoding rule.
The feedback encoding scheme Ψ is a mapping from the set of possible output sequences, y^{j-1} ∈ Y^{j-1} for j ∈ {1, 2, ..., n}, to the set of possible input symbol assignments for the messages in the set M:

	Ψ : ∪_{j=1}^{n} Y^{j-1} → X^{|M|}

The input letter for the message m ∈ M at time j, given y^{j-1} ∈ Y^{j-1}, is the m-th element of Ψ(y^{j-1}), i.e., Ψ_m(y^{j-1}). Note that when there is no feedback, Ψ(y^{j-1}) = Ψ_j for all y^{j-1} ∈ Y^{j-1}. The probability of observing a y^i ∈ Y^i conditioned on message m ∈ M is

	P{y^i | θ = m} = ∏_{j=1}^{i} W(y_j | Ψ_m(y^{j-1}))

A decoding rule is simply a mapping from the set of all possible length n output sequences Y^n to the message set³

³Indeed the same improvement can be obtained within the framework of the analysis introduced by Zigangirov and D'yachkov with some fairly minor modifications.
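The product formula above is straightforward to evaluate mechanically. The following sketch (variable names and the toy scheme are mine, purely for illustration) computes P{y^i | θ = m} for a given feedback encoding scheme:

```python
# Illustrative sketch: probability of an output prefix y^i given message m
# under a feedback encoding scheme, following
#     P{y^i | m} = prod_j W(y_j | Psi_m(y^{j-1})).
# W, psi and the example scheme below are hypothetical, not from the paper.

def output_prob(W, psi, m, y):
    """W[x][y]: channel transition matrix; psi(m, y_prefix) -> input letter."""
    p = 1.0
    for j, yj in enumerate(y):
        x = psi(m, y[:j])   # input chosen from the feedback y^{j-1}
        p *= W[x][yj]       # memoryless channel: multiply transitions
    return p

# Example: BSC(0.1) and a trivial (non-feedback) scheme sending m mod 2
W = [[0.9, 0.1], [0.1, 0.9]]
psi = lambda m, y_prefix: m % 2
print(output_prob(W, psi, 0, [0, 0, 1]))  # 0.9 * 0.9 * 0.1 = 0.081
```

With a genuinely feedback-dependent `psi`, the chosen input letter changes with the observed prefix while the formula stays the same.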

M:

	Φ : Y^n → M

We use a maximum a-posteriori probability (MAP) decoder. When there is a tie, the message with the smaller index is decoded.

III. ERROR ANALYSIS

The error probability of the message m ∈ M is

	P_{e,m} = Σ_{y^n ∈ Y^n} P{y^n | θ = m} I{Φ(y^n) ≠ m}

where I{·} is the indicator function. Note that for a MAP decoder, for any ρ > 0 and η > 0, we have

	I{Φ(y^n) ≠ m} ≤ ( Σ_{m'≠m} P{y^n | θ = m'}^η / P{y^n | θ = m}^η )^ρ   (1)

Consequently

	P_{e,m} ≤ Σ_{y^n} P{y^n | θ = m} ( Σ_{m'≠m} P{y^n | θ = m'}^η / P{y^n | θ = m}^η )^ρ

If we introduce the shorthand

	Γ_{ρ,η,j}(y^j) = Σ_{m∈M} P{y^j | θ = m} ( Σ_{m'≠m} P{y^j | θ = m'}^η / P{y^j | θ = m}^η )^ρ

and recall that P_e = (1/|M|) Σ_m P_{e,m}, we get

	P_e ≤ (1/|M|) Σ_{y^n ∈ Y^n} Γ_{ρ,η,n}(y^n)   (2)

If Γ_{ρ,η,j}(y^j) ≠ 0 for each j, we can divide and multiply by Γ_{ρ,η,j}(y^j), i.e.,

	P_e ≤ (1/|M|) Γ_{ρ,η,0}(y^0) Σ_{y^n ∈ Y^n} ∏_{j=1}^{n} ξ(ρ, η, y^j, Ψ)   (3)

where ξ(ρ, η, y^j, Ψ) is given by

	ξ(ρ, η, y^j, Ψ) = { Γ_{ρ,η,j}(y^j) / Γ_{ρ,η,j-1}(y^{j-1})   if Γ_{ρ,η,j-1}(y^{j-1}) ≠ 0
	                  { 0                                        if Γ_{ρ,η,j-1}(y^{j-1}) = 0

Let us define α(ρ, η, Ψ) as

	α(ρ, η, Ψ) = max_{j∈{1,...,n}} max_{y^{j-1}∈Y^{j-1}} Σ_{y_j∈Y} ξ(ρ, η, y^j, Ψ)   (4)

Then

	P_e ≤ (1/|M|) Γ_{ρ,η,0}(y^0) α(ρ, η, Ψ)^n ≤ |M|^ρ α(ρ, η, Ψ)^n = e^{n(ρR + ln α(ρ,η,Ψ))}   (5)

since Γ_{ρ,η,0}(y^0) = |M|(|M|-1)^ρ ≤ |M|^{1+ρ} and R = (ln |M|)/n. Now what we need is an encoding scheme Ψ with small α(ρ, η, Ψ). For that we will focus on encoding schemes that are repetitions of the same one step encoding function.

IV. ONE STEP ENCODING FUNCTIONS

Let Q be the set of |M|-dimensional vectors whose entries are all non-negative. Let Υ be a parametric function,

	Υ : Q × X^{|M|} → R

If q has at least two non-zero entries then Υ is defined as

	Υ_{ρ,η}(q, χ) = [ Σ_{y,m} q_m^{1/η} W(y|χ_m) ( Σ_{m'≠m} W(y|χ_{m'})^η q_{m'} / (W(y|χ_m)^η q_m) )^ρ ] / [ Σ_m q_m^{1/η} ( Σ_{m'≠m} q_{m'} / q_m )^ρ ]   (6)

else Υ_{ρ,η}(q, χ) = 0. Let us introduce the shorthand

	ϕ_m(y^j) = P{y^j | θ = m}^η,   so that   ϕ_m(y^j) = W(y_j | Ψ_m(y^{j-1}))^η ϕ_m(y^{j-1})

Then we can write ξ in terms of Υ as follows:

	Σ_{y_j∈Y} ξ(ρ, η, y^j, Ψ) = Υ_{ρ,η}( ϕ(y^{j-1}), Ψ(y^{j-1}) )   (7)

Note that the left hand side of (7) depends on the encoding in the first j-1 time units only through the |M|-dimensional vector ϕ(y^{j-1}). Thus if we can find a χ_q for each q ∈ Q such that Υ_{ρ,η}(q, χ_q) is small, we can use it to obtain a block code with small error probability. These mappings are what we call one step encoding functions.
An |M|-dimensional one step encoding function X is a mapping from Q to X^{|M|}, i.e.,

	X : Q → X^{|M|}

We define α(ρ, η, X) as

	α(ρ, η, X) = max_{q∈Q} Υ_{ρ,η}(q, X(q))   (8)

Lemma 1: For any ρ > 0, η > 0, |M|-dimensional one step encoding function X, and for any n there exists a block code such that

	P_e ≤ e^{n(ln α(ρ,η,X) + ρR)}   (9)

where R = (ln |M|)/n.

Proof: Consider the encoding scheme Ψ such that

	Ψ_m(y^j) = X_m( ϕ(y^j) )   (10)

As a result of equations (4), (7) and (8) we get

	α(ρ, η, Ψ) ≤ α(ρ, η, X)   (11)

Using equation (5) we get the claim of the lemma.

When we are calculating the achievable error exponents, the role of the minimum of ln α(ρ, η, X) will be very similar to that of E_0(ρ) in the case without feedback [5].

V. EXAMPLES

A. Achievability of the Random Coding Exponent

In this subsection we will, as a sanity check, re-derive the achievability of the random coding exponent for all DMCs using Lemma 1. Let η = 1/(1+ρ). For any input distribution P(·), at each q ∈ Q consider the set of all possible mappings of messages to input letters, X^{|M|}, and calculate the expected value of Υ_{ρ,η}(q, χ), where the probability of each χ ∈ X^{|M|} is simply given by ∏_{x∈X} P(x)^{r(χ,x)}, where r(χ, x) is the number of messages assigned to input letter x ∈ X. Then one can show that

	E[ Υ_{ρ,1/(1+ρ)}(q, χ) ] ≤ e^{-E_0(ρ,P)}   ∀q ∈ Q   (12)

where

	E_0(ρ, P) = -ln Σ_y ( Σ_x P(x) W(y|x)^{1/(1+ρ)} )^{1+ρ}

Thus for each q ∈ Q there exists at least one χ_q ∈ X^{|M|} such that

	Υ_{ρ,1/(1+ρ)}(q, χ_q) ≤ e^{-E_0(ρ,P)}   (13)

Thus for the one step encoding function X(q) = χ_q we have

	ln α(ρ, 1/(1+ρ), X) ≤ -E_0(ρ, P)

Using this together with the lemma we can conclude:

Corollary 1: For any input distribution P(·) on X, ρ ∈ (0, 1], R ≥ 0 and any n > 0 there exists a length n block code of the form given in equation (10) such that

	P_e ≤ e^{-n(E_0(ρ,P) - ρR)}   (14)

Note that the above description is not constructive, in the sense that it proves the existence of a one step encoding function X with the desired properties but does not tell us how to find it. The encoding scheme we investigate below, however, does specify an X with the desired properties.

B. Zigangirov-D'yachkov Encoding Scheme

In this subsection we will describe the Zigangirov-D'yachkov encoding scheme and apply Lemma 1 to this encoding scheme on binary input channels and k-ary symmetric channels. This encoding scheme was first described by Zigangirov [8] for binary symmetric channels, then generalized by D'yachkov to general DMCs. Consider a probability distribution P(·) on the input alphabet X and a q ∈ Q.
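The Gallager function E_0(ρ, P) above is easy to evaluate numerically. A minimal sketch (the function name and the BSC example are mine; the formula is the standard one quoted in Corollary 1):

```python
import numpy as np

# Sketch: Gallager's E0(rho, P) for a DMC,
#     E0(rho, P) = -ln sum_y ( sum_x P(x) W(y|x)^{1/(1+rho)} )^{1+rho}

def E0(rho, P, W):
    W = np.asarray(W, dtype=float)          # |X| x |Y| transition matrix
    P = np.asarray(P, dtype=float)          # input distribution
    inner = P @ W ** (1.0 / (1.0 + rho))    # sum over x, one value per y
    return -np.log(np.sum(inner ** (1.0 + rho)))

# BSC with crossover 0.1, uniform input distribution, rho = 1:
W = [[0.9, 0.1], [0.1, 0.9]]
print(E0(1.0, [0.5, 0.5], W))  # -ln(0.8) ≈ 0.2231
```

At ρ = 1 and uniform P this reduces to the cutoff-rate expression, which is why the closed-form value above can be checked by hand.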
Without loss of generality we can assume that⁴ for i, j ∈ M, if i ≤ j then q_i ≥ q_j.

⁴If this is not the case for a q, we can rearrange the messages m ∈ M according to their q_m in decreasing order. If two or more messages have the same mass q, we order them with respect to their index numbers.

Now we can define the mapping χ for a given q and P(·) iteratively as follows:

	γ_1(x) = 0   ∀x ∈ X
	χ_j = argmin_{x ∈ supp(P)} γ_j(x)/P(x),    γ_{j+1}(x) = Σ_{i≤j: χ_i=x} q_i

For assigning message j ∈ M we first calculate, for each input letter x ∈ X, the total mass of all of the messages that have already been assigned to x, namely γ_j(x). Then we divide the γ_j(x)'s by the corresponding P(x) values and assign the message j ∈ M to the x ∈ X for which P(x) > 0 and γ_j(x)/P(x) is minimal. If there is a tie, we choose the input letter x with the larger P(x). If there is still a tie, we choose the input letter with the smaller index.

1) Properties of the Z-D Encoding Scheme: Now let us point out one property of this encoding scheme which will come in handy later on. A Zigangirov-D'yachkov encoding scheme with P(·) will satisfy

	ζ_m ≜ (q_{χ_m} - q_m) / P(χ_m) ≤ q_x / P(x)   ∀x ∈ supp(P), m ∈ M   (15)

where q_x = γ(x; q, P(·), χ) = Σ_{m': χ_{m'}=x} q_{m'}. In order to see this, simply consider the last message assigned to each input letter x ∈ X: it satisfies this property by construction, and since the messages assigned to the same letter prior to the last one have at least the mass of the last one, they satisfy the property given in (15) too. Thus for any q ∈ Q and any input distribution P(·), the mapping created by a Zigangirov-D'yachkov encoding scheme satisfies

	q_x - P(x) ζ_m ≥ 0   ∀x ∈ X, m ∈ M   (16)

In other words, with the Zigangirov-D'yachkov encoding scheme the mass of q is distributed over the input letters in such a way that, when we consider all of the mass except that of an m ∈ M, it is a linear combination, with non-negative coefficients, of P(·) and the δ_x's for x ≠ χ_m. Thus

	Σ_{m'≠m} I{χ_{m'} = x} q_{m'} = ζ_m P(x) + I{x ≠ χ_m} ( q_x - ζ_m P(x) )

Let us define f_m, with Q_m ≜ Σ_{m'≠m} q_{m'}, as

	f_m ≜ Σ_y W(y|χ_m)^{1-ηρ} ( Σ_{m'≠m} W(y|χ_{m'})^η q_{m'} / Q_m )^ρ
	    = Σ_y W(y|χ_m)^{1-ηρ} ( [ ζ_m Σ_x P(x) W(y|x)^η + Σ_{x≠χ_m} (q_x - ζ_m P(x)) W(y|x)^η ] / Q_m )^ρ   (17)

so that Υ_{ρ,η}(q, χ) is a weighted average of the f_m's, and hence Υ_{ρ,η}(q, χ) ≤ max_m f_m. For ρ ≥ 1, z^ρ is a convex function of z, thus E[z]^ρ ≤ E[z^ρ].
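The iterative assignment rule above is fully constructive, so it can be sketched directly in code (variable names are mine, not the paper's):

```python
# Sketch of the Zigangirov-D'yachkov one step assignment rule: message j
# goes to the input letter x minimizing gamma_j(x)/P(x) over supp(P),
# ties broken by larger P(x), then by smaller index.

def zd_assign(q, P):
    """q: message masses sorted in decreasing order; P: input distribution.
    Returns chi, where chi[j] is the input letter assigned to message j."""
    gamma = [0.0] * len(P)               # mass already placed on each letter
    support = [x for x in range(len(P)) if P[x] > 0]
    chi = []
    for qj in q:
        # minimize gamma(x)/P(x); tie-break: larger P(x), then smaller x
        x_star = min(support, key=lambda x: (gamma[x] / P[x], -P[x], x))
        chi.append(x_star)
        gamma[x_star] += qj
    return chi

# Uniform binary input, four messages with decreasing masses:
print(zd_assign([0.4, 0.3, 0.2, 0.1], [0.5, 0.5]))  # [0, 1, 1, 0]
```

Tracing the example: the first message goes to letter 0, the second and third to letter 1 (whose normalized load stays smaller), and the fourth back to letter 0, which is exactly the balancing behavior that yields property (15).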

Using equation (17) and the convexity of z^ρ for ρ ≥ 1, we get

	f_m ≤ ( ζ_m e^{β_{χ_m}(P,ρ,η)} + Σ_{x≠χ_m} (q_x - P(x)ζ_m) e^{μ_{χ_m}(ρη)} ) / Q_m
	    ≤ e^{max{ β_{χ_m}(P,ρ,η), μ_{χ_m}(ρη) }}

where, for i ∈ X, β_i(P, ρ, η) and μ_i(ρη) are defined as

	β_i(P, ρ, η) ≜ ln Σ_y W(y|i)^{1-ηρ} ( Σ_x P(x) W(y|x)^η )^ρ

	μ_i(ρη) ≜ max_{i'∈X, i'≠i} ln Σ_y W(y|i)^{1-ρη} W(y|i')^{ρη}

Consequently, for ρ ≥ 1 and η ≥ 0,

	Υ_{ρ,η}(q, χ) ≤ max_m f_m ≤ e^{max_{x∈supp(P)} max{ β_x(P,ρ,η), μ_x(ρη) }}

Thus for ρ ≥ 1, for all input distributions P(·) and for all η > 0,

	ln α(ρ, η, X) ≤ max_{x∈supp(P)} max{ β_x(P,ρ,η), μ_x(ρη) }   (18)

For certain channels the property given in equation (18), together with Lemma 1, implies that the sphere packing exponent is achievable on an interval of the form [R_Dcrit, R_crit].

Corollary 2: If for a DMC

	max_{x∈X} μ_x( ρ/(1+ρ) ) ≤ -E_0(ρ)   (19)

on an interval of the form ρ ∈ [1, ρ_Dc], then

	ln α(ρ, 1/(1+ρ), X) ≤ -E_0(ρ)   ∀ρ ∈ [1, ρ_Dc]

Proof: In order to see this, first recall that for the P(·) that maximizes E_0(ρ, P), i.e. satisfying max_P E_0(ρ, P) = E_0(ρ), we have

	β_x(P, ρ, 1/(1+ρ)) = -E_0(ρ)   ∀x ∈ supp(P)

Now the statement simply follows from equation (18).

Recall that the sphere packing exponent is given by

	E_sp(R) = max_{ρ≥0} E_0(ρ) - ρR

Thus for each R there is a corresponding ρ_R, which is the maximizer of the expression above. As a result of Lemma 1 and Corollary 2, for the rates with ρ_R ∈ [1, ρ_Dc] the sphere packing exponent is achievable as the error exponent on channels satisfying the condition given in equation (19). Two families of channels that satisfy condition (19) are binary input channels and k-ary symmetric channels.

2) Binary Input Channels: The binary input channel case has also been addressed by D'yachkov; we simply rederive his results here. A binary input channel is a DMC for which X = {0, 1}. For ρ ≥ 1 and η ≥ 0, using equation (18) we get

	α(ρ, η, X) ≤ e^{max{ β_0(P,ρ,η), β_1(P,ρ,η), μ_0(ρη), μ_1(ρη) }}   (20)

For the rates for which ρ_R ∈ [1, ρ_Dc] this leads to the sphere packing exponent. For the rates for which ρ_R > ρ_Dc we simply minimize the expression in equation (20) over P, η and ρ to find the best possible error exponent achievable with this scheme.
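The parametric relation E_sp(R) = max_{ρ≥0} [E_0(ρ) - ρR] recalled above can be evaluated numerically. A sketch under my own choices (grid search over a bounded ρ range, which is adequate for rates above R_∞ where the maximizer is finite; the BSC example is illustrative):

```python
import numpy as np

# Sketch: sphere packing exponent E_sp(R) = max_{rho >= 0} [E0(rho) - rho*R],
# approximated by a grid search over rho for a uniform-input BSC.

def E0(rho, P, W):
    W, P = np.asarray(W, float), np.asarray(P, float)
    inner = P @ W ** (1.0 / (1.0 + rho))
    return -np.log(np.sum(inner ** (1.0 + rho)))

def E_sp(R, P, W, rho_max=20.0, steps=20001):
    rhos = np.linspace(0.0, rho_max, steps)
    return max(E0(r, P, W) - r * R for r in rhos)

W = [[0.9, 0.1], [0.1, 0.9]]            # BSC(0.1)
print(E_sp(0.30, [0.5, 0.5], W))        # exponent at R = 0.30 nats/use
print(E_sp(0.35, [0.5, 0.5], W))        # smaller: E_sp decreases with R
```

The maximizing ρ of the grid search plays the role of ρ_R in the discussion above; for rates where ρ_R lands in [1, ρ_Dc], Corollary 2 says this very value is achieved with feedback by the Z-D scheme.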
For the rates such that ρ_R ∈ (0, 1) one can also show that the sphere packing exponent is achievable. For that one needs to consider

	f_m = Σ_y W(y|χ_m)^{1-ηρ} ( Σ_{m'≠m} W(y|χ_{m'})^η q_{m'} / Q_m )^ρ
	    = Σ_y W(y|χ_m)^{1-ηρ} ( z_m W(y|χ_m)^η + (1 - z_m) W(y|χ̄_m)^η )^ρ

where χ̄_m denotes the input letter other than χ_m and z_m = Σ_{m'≠m} I{χ_{m'}=χ_m} q_{m'} / Q_m. It can be shown that, for ρ < 1 and η < 1, ∂f_m/∂z_m ≥ 0. Furthermore, as a result of equation (16) we know that z_m ≤ P(χ_m). Thus for ρ < 1, η < 1,

	f_m ≤ e^{β_{χ_m}(P,ρ,η)}   and   α(ρ, η, X) ≤ e^{max{ β_0(P,ρ,η), β_1(P,ρ,η) }}

If we consider the P(·) that is the maximizer of E_0(ρ, P), we get

	ln α(ρ, 1/(1+ρ), X) ≤ -E_0(ρ)

Recalling the definitions of ρ_R and E_sp(R), we conclude that the sphere packing exponent is achievable for R such that ρ_R ∈ (0, 1).

3) k-ary Symmetric Channels: Let us consider the k-ary symmetric channel with ε < (k-1)/k, i.e.,

	W(j|i) = { 1-ε        if i = j
	         { ε/(k-1)    if i ≠ j

Note that for any ρ > 0, η > 0 and x ∈ X we have μ_x(ρη) = μ(ρη), where

	μ(ρη) = ln[ (1-ε)^{1-ρη} (ε/(k-1))^{ρη} + (ε/(k-1))^{1-ρη} (1-ε)^{ρη} + (k-2) ε/(k-1) ]

Furthermore note that for any ρ > 0, η > 0, x ∈ X and for P(x) = 1/k,

	β(ρ, η) ≜ β_x(P, ρ, η) = ln[ ( (1-ε)^{1-ηρ} + (k-1)(ε/(k-1))^{1-ηρ} ) ( ( (1-ε)^η + (k-1)(ε/(k-1))^η ) / k )^ρ ]

Thus as a result of equation (18), for ρ ≥ 1,

	α(ρ, η, X) ≤ e^{max{ β(ρ,η), μ(ρη) }}   (21)

For the k = 2 case these expressions are equivalent to those of Zigangirov in [8], which were specifically derived for BSCs. For k ≥ 3, these expressions lead to α(ρ, η)'s that are strictly

better than the corresponding quantities in [4] for all ρ > 1. Figure ?? shows the resulting error exponents for a particular channel. In [4], the equivalent of equation (21) has β_D(ρ, η) instead of β(ρ, η), where

	β_D(ρ, η) = ln[ ( (1-ε)^{1-ηρ} + (k-1)(ε/(k-1))^{1-ηρ} ) ( ( (1-ε)^{ηρ} + (k-1)(ε/(k-1))^{ηρ} ) / k ) ]

Since the function z^ρ is strictly convex in z for ρ > 1, β(ρ, η) < β_D(ρ, η) for all ρ > 1 and η > 0.

4) Remarks on the Z-D Encoding Scheme: The two example channels we have considered do not reveal the main weakness of the Zigangirov-D'yachkov encoding scheme. Consider a four-by-four symmetric channel W(y|x) whose error exponent is equal to the sphere packing exponent at all rates. In order to see why, consider any rate R > 0. We already know that for every R > 0 there is a fixed list size L_R such that for any n there is a length n code with decoding list size L_R whose error probability satisfies P_e ≤ e^{-E_sp(R)n}. Using log_2 L_R extra channel uses, the transmitter can specify the correct message within the decoded list, if the correct message is in the list. Since this extra time becomes negligible as n increases, we can conclude that the sphere packing exponent is achievable for this channel when there is feedback. However, with the Zigangirov-D'yachkov encoding scheme we cannot reach the same conclusion. Clearly we should choose P(·) to be the uniform distribution. Consider for example a q whose mass is concentrated on the first two messages; any smart encoding scheme would assign the first two messages to input letters that are not consecutive, but the Zigangirov-D'yachkov encoding scheme will not necessarily do so. Considering similar q's for low enough ρ's, one can show that the Z-D encoding scheme leads to an α(ρ, 1/(1+ρ)) such that ln α(ρ, 1/(1+ρ)) > -E_0(ρ). As a result, the Z-D encoding scheme will have an error exponent strictly less than the sphere packing exponent.
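The convexity step behind β(ρ, η) < β_D(ρ, η) is a plain Jensen inequality, and it can be checked numerically with arbitrary illustrative numbers (the values below are mine, not a specific channel):

```python
import numpy as np

# Numerical check of the Jensen step used above: for rho > 1, z**rho is
# strictly convex, so (E[z])**rho < E[z**rho] for any non-constant z.
# Keeping the rho-th power outside the average (as in beta) is therefore
# tighter than averaging the rho-th powers (as in beta_D).

z = np.array([0.2, 0.5, 0.9, 0.4])    # arbitrary positive values
w = np.full(4, 0.25)                  # uniform weights
rho = 2.5
lhs = (w @ z) ** rho                  # power of the average
rhs = w @ z ** rho                    # average of the powers
print(lhs < rhs)                      # True for non-constant z
```

The same strict inequality, applied term by term inside the logarithm, is what makes the bound here strictly better than D'yachkov's for k ≥ 3 and ρ > 1.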
On the other hand, for all such anomalies we observed, we were also able to find a modification of the Z-D encoding scheme, which employs a smarter encoding scheme for the first |X| messages and which performs optimally. However, we have yet to find a general modification of the Z-D encoding scheme that works for all q ∈ Q.

VI. CONCLUSIONS AND FUTURE WORK

Apart from the minor improvement for k-ary symmetric channels, our results are essentially rederivations of the results of Zigangirov [8] and D'yachkov [4]. However, the simplicity of the analysis does help us to see how to generalize the Zigangirov-D'yachkov encoding scheme to general DMCs. On a separate note, Burnashev [2] considered binary symmetric channels and showed that for all rates between 0 and R_Zcrit one can reach an error exponent strictly higher than the ones given in [8]. Indeed, similar modifications are possible within our framework too. We will present those in our journal paper on this subject.

REFERENCES

[1] E. R. Berlekamp. Block Coding with Noiseless Feedback. Ph.D. thesis, Massachusetts Institute of Technology, Department of Electrical Engineering, 1964.
[2] M. V. Burnashev. On the reliability function of a binary symmetrical channel with feedback. Problemy Peredachi Informatsii, 24(1):3-10, 1988.
[3] R. L. Dobrushin. An asymptotic bound for the probability error of information transmission through a channel without memory using the feedback. Problemy Kibernetiki, 8:161-168, 1962.
[4] A. G. D'yachkov. Upper bounds on the error probability for discrete memoryless channels with feedback. Problemy Peredachi Informatsii, 11(4):13-28, 1975.
[5] R. G. Gallager. A simple derivation of the coding theorem and some applications. IEEE Transactions on Information Theory, 11(1):3-18, 1965.
[6] C. Shannon. The zero error capacity of a noisy channel. IEEE Transactions on Information Theory, 2(3):8-19, 1956.
[7] A. Yu. Sheverdyaev. Lower bound for error probability in a discrete memoryless channel with feedback. Problemy Peredachi Informatsii, 18(4):5-15, 1982.
[8] K. Sh. Zigangirov. Upper bounds for the error probability for channels with feedback. Problemy Peredachi Informatsii, 6(2):87-92, 1970.


More information

The Hallucination Bound for the BSC

The Hallucination Bound for the BSC The Hallucination Bound for the BSC Anant Sahai and Stark Draper Wireless Foundations Department of Electrical Engineering and Computer Sciences University of California at Berkeley ECE Department University

More information

Bounds on Mutual Information for Simple Codes Using Information Combining

Bounds on Mutual Information for Simple Codes Using Information Combining ACCEPTED FOR PUBLICATION IN ANNALS OF TELECOMM., SPECIAL ISSUE 3RD INT. SYMP. TURBO CODES, 003. FINAL VERSION, AUGUST 004. Bounds on Mutual Information for Simple Codes Using Information Combining Ingmar

More information

Correcting Bursty and Localized Deletions Using Guess & Check Codes

Correcting Bursty and Localized Deletions Using Guess & Check Codes Correcting Bursty and Localized Deletions Using Guess & Chec Codes Serge Kas Hanna, Salim El Rouayheb ECE Department, Rutgers University serge..hanna@rutgers.edu, salim.elrouayheb@rutgers.edu Abstract

More information

Introduction to Convolutional Codes, Part 1

Introduction to Convolutional Codes, Part 1 Introduction to Convolutional Codes, Part 1 Frans M.J. Willems, Eindhoven University of Technology September 29, 2009 Elias, Father of Coding Theory Textbook Encoder Encoder Properties Systematic Codes

More information

Constructing Polar Codes Using Iterative Bit-Channel Upgrading. Arash Ghayoori. B.Sc., Isfahan University of Technology, 2011

Constructing Polar Codes Using Iterative Bit-Channel Upgrading. Arash Ghayoori. B.Sc., Isfahan University of Technology, 2011 Constructing Polar Codes Using Iterative Bit-Channel Upgrading by Arash Ghayoori B.Sc., Isfahan University of Technology, 011 A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree

More information

Exact Probability of Erasure and a Decoding Algorithm for Convolutional Codes on the Binary Erasure Channel

Exact Probability of Erasure and a Decoding Algorithm for Convolutional Codes on the Binary Erasure Channel Exact Probability of Erasure and a Decoding Algorithm for Convolutional Codes on the Binary Erasure Channel Brian M. Kurkoski, Paul H. Siegel, and Jack K. Wolf Department of Electrical and Computer Engineering

More information

Convergence analysis for a class of LDPC convolutional codes on the erasure channel

Convergence analysis for a class of LDPC convolutional codes on the erasure channel Convergence analysis for a class of LDPC convolutional codes on the erasure channel Sridharan, Arvind; Lentmaier, Michael; Costello Jr., Daniel J.; Zigangirov, Kamil Published in: [Host publication title

More information

CSCI 2570 Introduction to Nanocomputing

CSCI 2570 Introduction to Nanocomputing CSCI 2570 Introduction to Nanocomputing Information Theory John E Savage What is Information Theory Introduced by Claude Shannon. See Wikipedia Two foci: a) data compression and b) reliable communication

More information

EE-597 Notes Quantization

EE-597 Notes Quantization EE-597 Notes Quantization Phil Schniter June, 4 Quantization Given a continuous-time and continuous-amplitude signal (t, processing and storage by modern digital hardware requires discretization in both

More information

Bregman Divergence and Mirror Descent

Bregman Divergence and Mirror Descent Bregman Divergence and Mirror Descent Bregman Divergence Motivation Generalize squared Euclidean distance to a class of distances that all share similar properties Lots of applications in machine learning,

More information

UNIT I INFORMATION THEORY. I k log 2

UNIT I INFORMATION THEORY. I k log 2 UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Igal Sason Department of Electrical Engineering Technion - Israel Institute of Technology Haifa 32000, Israel 2009 IEEE International

More information

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122 Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel

More information

Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps

Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps 2012 IEEE International Symposium on Information Theory Proceedings Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps Yeow Meng Chee and Punarbasu Purkayastha Division of Mathematical

More information

ECE 587 / STA 563: Lecture 5 Lossless Compression

ECE 587 / STA 563: Lecture 5 Lossless Compression ECE 587 / STA 563: Lecture 5 Lossless Compression Information Theory Duke University, Fall 2017 Author: Galen Reeves Last Modified: October 18, 2017 Outline of lecture: 5.1 Introduction to Lossless Source

More information

ECE 587 / STA 563: Lecture 5 Lossless Compression

ECE 587 / STA 563: Lecture 5 Lossless Compression ECE 587 / STA 563: Lecture 5 Lossless Compression Information Theory Duke University, Fall 28 Author: Galen Reeves Last Modified: September 27, 28 Outline of lecture: 5. Introduction to Lossless Source

More information

Transmitting k samples over the Gaussian channel: energy-distortion tradeoff

Transmitting k samples over the Gaussian channel: energy-distortion tradeoff Transmitting samples over the Gaussian channel: energy-distortion tradeoff Victoria Kostina Yury Polyansiy Sergio Verdú California Institute of Technology Massachusetts Institute of Technology Princeton

More information

An Improved Sphere-Packing Bound for Finite-Length Codes over Symmetric Memoryless Channels

An Improved Sphere-Packing Bound for Finite-Length Codes over Symmetric Memoryless Channels POST-PRIT OF THE IEEE TRAS. O IFORMATIO THEORY, VOL. 54, O. 5, PP. 96 99, MAY 8 An Improved Sphere-Packing Bound for Finite-Length Codes over Symmetric Memoryless Channels Gil Wiechman Igal Sason Department

More information

ELEC546 Review of Information Theory

ELEC546 Review of Information Theory ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random

More information

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

An introduction to basic information theory. Hampus Wessman

An introduction to basic information theory. Hampus Wessman An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on

More information

2 Generating Functions

2 Generating Functions 2 Generating Functions In this part of the course, we re going to introduce algebraic methods for counting and proving combinatorial identities. This is often greatly advantageous over the method of finding

More information

Solutions to the 77th William Lowell Putnam Mathematical Competition Saturday, December 3, 2016

Solutions to the 77th William Lowell Putnam Mathematical Competition Saturday, December 3, 2016 Solutions to the 77th William Lowell Putnam Mathematical Competition Saturday December 3 6 Kiran Kedlaya and Lenny Ng A The answer is j 8 First suppose that j satisfies the given condition For p j we have

More information

A Practitioner s Guide to Generalized Linear Models

A Practitioner s Guide to Generalized Linear Models A Practitioners Guide to Generalized Linear Models Background The classical linear models and most of the minimum bias procedures are special cases of generalized linear models (GLMs). GLMs are more technically

More information

RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths

RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths Kasra Vakilinia, Dariush Divsalar*, and Richard D. Wesel Department of Electrical Engineering, University

More information

Channel combining and splitting for cutoff rate improvement

Channel combining and splitting for cutoff rate improvement Channel combining and splitting for cutoff rate improvement Erdal Arıkan Electrical-Electronics Engineering Department Bilkent University, Ankara, 68, Turkey Email: arikan@eebilkentedutr arxiv:cs/5834v

More information

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student

More information

Delay, feedback, and the price of ignorance

Delay, feedback, and the price of ignorance Delay, feedback, and the price of ignorance Anant Sahai based in part on joint work with students: Tunc Simsek Cheng Chang Wireless Foundations Department of Electrical Engineering and Computer Sciences

More information

Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels

Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels Bike Xie, Student Member, IEEE and Richard D. Wesel, Senior Member, IEEE Abstract arxiv:0811.4162v4 [cs.it] 8 May 2009

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

A Simple Converse of Burnashev's Reliability Function

A Simple Converse of Burnashev's Reliability Function A Simple Converse of Burnashev's Reliability Function The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published Publisher

More information

Data Compression. Limit of Information Compression. October, Examples of codes 1

Data Compression. Limit of Information Compression. October, Examples of codes 1 Data Compression Limit of Information Compression Radu Trîmbiţaş October, 202 Outline Contents Eamples of codes 2 Kraft Inequality 4 2. Kraft Inequality............................ 4 2.2 Kraft inequality

More information

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels Mehdi Mohseni Department of Electrical Engineering Stanford University Stanford, CA 94305, USA Email: mmohseni@stanford.edu

More information

Lecture 4 Channel Coding

Lecture 4 Channel Coding Capacity and the Weak Converse Lecture 4 Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 15, 2014 1 / 16 I-Hsiang Wang NIT Lecture 4 Capacity

More information

Trellis-based Detection Techniques

Trellis-based Detection Techniques Chapter 2 Trellis-based Detection Techniques 2.1 Introduction In this chapter, we provide the reader with a brief introduction to the main detection techniques which will be relevant for the low-density

More information

SHARED INFORMATION. Prakash Narayan with. Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe

SHARED INFORMATION. Prakash Narayan with. Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe SHARED INFORMATION Prakash Narayan with Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe 2/40 Acknowledgement Praneeth Boda Himanshu Tyagi Shun Watanabe 3/40 Outline Two-terminal model: Mutual

More information

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Jilei Hou, Paul H. Siegel and Laurence B. Milstein Department of Electrical and Computer Engineering

More information

Research Collection. The relation between block length and reliability for a cascade of AWGN links. Conference Paper. ETH Library

Research Collection. The relation between block length and reliability for a cascade of AWGN links. Conference Paper. ETH Library Research Collection Conference Paper The relation between block length and reliability for a cascade of AWG links Authors: Subramanian, Ramanan Publication Date: 0 Permanent Link: https://doiorg/0399/ethz-a-00705046

More information

Investigation of the Elias Product Code Construction for the Binary Erasure Channel

Investigation of the Elias Product Code Construction for the Binary Erasure Channel Investigation of the Elias Product Code Construction for the Binary Erasure Channel by D. P. Varodayan A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF APPLIED

More information

Exponential Error Bounds for Block Concatenated Codes with Tail Biting Trellis Inner Codes

Exponential Error Bounds for Block Concatenated Codes with Tail Biting Trellis Inner Codes International Symposium on Information Theory and its Applications, ISITA004 Parma, Italy, October 10 13, 004 Exponential Error Bounds for Block Concatenated Codes with Tail Biting Trellis Inner Codes

More information

Sparsity. The implication is that we would like to find ways to increase efficiency of LU decomposition.

Sparsity. The implication is that we would like to find ways to increase efficiency of LU decomposition. Sparsity. Introduction We saw in previous notes that the very common problem, to solve for the n vector in A b ( when n is very large, is done without inverting the n n matri A, using LU decomposition.

More information

Asymptotic Expansion and Error Exponent of Two-Phase Variable-Length Coding with Feedback for Discrete Memoryless Channels

Asymptotic Expansion and Error Exponent of Two-Phase Variable-Length Coding with Feedback for Discrete Memoryless Channels 1 Asymptotic Expansion and Error Exponent of Two-Phase Variable-Length oding with Feedback for Discrete Memoryless hannels Tsung-Yi hen, Adam R. Williamson and Richard D. Wesel tsungyi.chen@northwestern.edu,

More information

X 1 : X Table 1: Y = X X 2

X 1 : X Table 1: Y = X X 2 ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilistic Graphical Models Lecture Notes Fall 2009 November, 2009 Byoung-Ta Zhang School of Computer Science and Engineering & Cognitive Science, Brain Science, and Bioinformatics Seoul National University

More information

MATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q

MATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q MATH-315201 This question paper consists of 6 printed pages, each of which is identified by the reference MATH-3152 Only approved basic scientific calculators may be used. c UNIVERSITY OF LEEDS Examination

More information

Lecture 3: Channel Capacity

Lecture 3: Channel Capacity Lecture 3: Channel Capacity 1 Definitions Channel capacity is a measure of maximum information per channel usage one can get through a channel. This one of the fundamental concepts in information theory.

More information

Coding theory: Applications

Coding theory: Applications INF 244 a) Textbook: Lin and Costello b) Lectures (Tu+Th 12.15-14) covering roughly Chapters 1,9-12, and 14-18 c) Weekly exercises: For your convenience d) Mandatory problem: Programming project (counts

More information

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. XX, NO. Y, MONTH 204 Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback Ziad Ahmad, Student Member, IEEE,

More information

AN ON-LINE ADAPTATION ALGORITHM FOR ADAPTIVE SYSTEM TRAINING WITH MINIMUM ERROR ENTROPY: STOCHASTIC INFORMATION GRADIENT

AN ON-LINE ADAPTATION ALGORITHM FOR ADAPTIVE SYSTEM TRAINING WITH MINIMUM ERROR ENTROPY: STOCHASTIC INFORMATION GRADIENT A O-LIE ADAPTATIO ALGORITHM FOR ADAPTIVE SYSTEM TRAIIG WITH MIIMUM ERROR ETROPY: STOHASTI IFORMATIO GRADIET Deniz Erdogmus, José. Príncipe omputational euroengineering Laboratory Department of Electrical

More information

Analysis and Design of Finite Alphabet. Iterative Decoders Robust to Faulty Hardware

Analysis and Design of Finite Alphabet. Iterative Decoders Robust to Faulty Hardware Analysis and Design of Finite Alphabet 1 Iterative Decoders Robust to Faulty Hardware Elsa Dupraz, David Declercq, Bane Vasić and Valentin Savin arxiv:1410.2291v1 [cs.it] 8 Oct 2014 Abstract This paper

More information

Can Feedback Increase the Capacity of the Energy Harvesting Channel?

Can Feedback Increase the Capacity of the Energy Harvesting Channel? Can Feedback Increase the Capacity of the Energy Harvesting Channel? Dor Shaviv EE Dept., Stanford University shaviv@stanford.edu Ayfer Özgür EE Dept., Stanford University aozgur@stanford.edu Haim Permuter

More information

2012 IEEE International Symposium on Information Theory Proceedings

2012 IEEE International Symposium on Information Theory Proceedings Decoding of Cyclic Codes over Symbol-Pair Read Channels Eitan Yaakobi, Jehoshua Bruck, and Paul H Siegel Electrical Engineering Department, California Institute of Technology, Pasadena, CA 9115, USA Electrical

More information

On Third-Order Asymptotics for DMCs

On Third-Order Asymptotics for DMCs On Third-Order Asymptotics for DMCs Vincent Y. F. Tan Institute for Infocomm Research (I R) National University of Singapore (NUS) January 0, 013 Vincent Tan (I R and NUS) Third-Order Asymptotics for DMCs

More information