Exploiting Source Redundancy to Improve the Rate of Polar Codes
Ying Wang, ECE Department, Texas A&M University; Krishna R. Narayanan, ECE Department, Texas A&M University; Anxiao (Andrew) Jiang, CSE Department, Texas A&M University

Abstract—We consider a joint source-channel decoding (JSCD) problem in which the source encoder leaves residual redundancy in the source. We first model the redundancy in the source encoder's output as the output of a side-information channel at the channel decoder, and show that this improves the random-coding error exponent. Then, we consider the use of polar codes in this framework when the source redundancy is modeled using a sequence of t-erasure-correcting block codes. For this model, the rate of polar codes can be improved by unfreezing some of the originally frozen bits, and the improvement in rate depends on the distribution of frozen bits within a codeword. We present a proof of the convergence of that distribution, as well as of the convergence of the maximum rate improvement. The significant performance improvement and improved rate provide strong evidence that polar codes are a good candidate for exploiting the benefit of source redundancy in the JSCD scheme.

I. INTRODUCTION

We study how to improve the performance of channel codes using the natural redundancy in data. By natural redundancy, we refer to the inherent redundancy in data (e.g., features in languages and images) that is not artificially added for error correction. We focus on compressed languages here. Recent works have shown that even after compression of texts (at a compression ratio higher than practical systems achieve), substantial natural redundancy still exists [1]. For example, after English texts are compressed by an LZW code with a dictionary of 2^20 patterns (much larger than the LZW dictionaries in many practical systems) and transmitted over a binary erasure channel (BEC), a decoding algorithm using only natural redundancy can reduce the noise in the compressed texts by over 85% for channel erasure rates from 5% to 30%.
Natural redundancy can significantly improve the error-correction capabilities of ECCs [2]–[7]. The topic is an extension of joint source-channel coding and denoising, which have been studied extensively. Polar codes have gained much attention due to their capacity-achieving property, low encoding/decoding complexity, and good error-floor performance [8]. In [4], we proposed a joint source-channel coding scheme where the text source is compressed by a Huffman code and then transmitted through a noisy channel after being encoded by a polar code. We showed that the proposed joint decoder, which combines a language decoder with the polar-code decoder, provides substantial improvement in performance over CRC-aided list decoding, while the decoding complexity is kept in the same order as the list decoding scheme. (This work was supported in part by the National Science Foundation under grants ECCS.) The insight behind using polar codes is that the language decoder and the polar decoder both work over trees, so they can be combined in a computationally efficient way. Another insight comes from the sequential nature of polar decoding. In general, successive cancellation (SC) decoding of polar codes suffers from error propagation: wrongly decoded bits in early stages may severely degrade the decoding performance of the whole sequence. To alleviate this, we can reorder the words before encoding, putting more reliable sub-sequences in front to suppress the error propagation. By cleverly arranging the order of information bits based on their reliability/recoverability, we are able to largely improve the overall performance. This makes polar codes more naturally suited for exploiting source redundancy than LDPC-type codes. We showed a significant improvement in the performance of joint decoding of polar codes in [4]. The improvement in the finite-length performance inspires us to explore how much the rate of channel codes can be improved asymptotically by source redundancy.
We first give a general model of decoding with side information, and show that the source redundancy can help improve the error exponent and rate of channel codes. Then we analyze in particular the rate improvement of polar codes. A theoretical model is studied for natural redundancy in compressed languages: we model the source redundancy as a set of t-erasure-correcting block codes concatenated to the information bits. The block code is a simple and effective way to model the erasure-correcting ability of words in the dictionary; each block code corresponds to a word or a longer text. We show that the rate of polar codes can be improved by the source redundancy, and that the improvement in the rate depends on the distribution of frozen bits within a codeword. We obtained a lower bound on the improved rate of polar codes in [4], assuming the limit distribution exists. In this work, we formally prove that the distribution of frozen bits converges to a limit distribution. We propose an optimal information-bit allocation algorithm and analyze the maximum improvement in the rate of polar codes with the proposed algorithm.

II. BACKGROUND

In this section we review polar codes and the joint decoding scheme of polar codes with source redundancy proposed in [4].

©2017 IEEE
A. Polar codes

Let n = 2^m be the length of the polar code. Polar codes are recursively constructed with the generator matrix G_n = R_n G_2^{⊗m}, where R_n is an n×n bit-reversal permutation matrix, G_2 = [[1, 0], [1, 1]], and ⊗ is the Kronecker product. Let u_1^n be the bits to be encoded, x_1^n the coded bits, and y_1^n the received bits. Also, let W(y|x) be the channel law. Arikan's polar codes consist of two parts, namely channel combining and channel splitting. For channel combining, n copies of the channel are combined to create the channel

W_n(y_1^n | u_1^n) ≜ W^n(y_1^n | u_1^n G_n) = ∏_{i=1}^n W(y_i | x_i),

due to the memoryless property of the channel. The channel splitting process splits W_n back into a set of n bit channels

W_m^{(i)}(y_1^n, u_1^{i−1} | u_i) ≜ (1/2^{n−1}) Σ_{u_{i+1}^n} W_n(y_1^n | u_1^n), i = 1, ..., n.

Let I(W) be the capacity of the discrete memoryless channel (DMC) defined by W. The channels W_m^{(i)} polarize, in the sense that the fraction of bit channels for which I(W_m^{(i)}) → 1 converges to I(W), and the remaining channels have I(W_m^{(i)}) → 0 as n → ∞. The construction of Arikan's polar codes is to find the set of best channels F^c and transmit information only on those channels.

B. Joint decoding of polar codes for language-based sources

Motivated by the nice properties of polar codes, we study the potential of polar codes in the joint decoding scheme. An advantage of polar codes in the joint scheme is that list decoding of polar codes is based on a tree structure, which can be naturally combined with the tree structure of the dictionary. A joint decoding framework is given in [4]. The decoder employs joint list decoding to take advantage of the a-priori information provided by the dictionary. Simulation results show that the scheme significantly outperforms list decoding of CRC-aided polar codes.
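For concreteness, the construction G_n = R_n G_2^{⊗m} and the encoding map x_1^n = u_1^n G_n can be sketched in plain Python. This is an illustrative sketch added for exposition, not from the paper (practical encoders use an O(n log n) butterfly instead of an explicit matrix):

```python
def polar_generator(m):
    """Build G_n = R_n (G_2 Kronecker-power m) over GF(2), n = 2^m.

    G_2 = [[1, 0], [1, 1]]; R_n is the bit-reversal permutation matrix.
    Returns the matrix as a list of n rows of n bits.
    """
    G2 = [[1, 0], [1, 1]]
    G = [[1]]
    for _ in range(m):
        # Kronecker step: new[2i+r][2j+c] = G[i][j] * G2[r][c]
        G = [[gij * G2[r][c] for gij in row for c in (0, 1)]
             for row in G for r in (0, 1)]
    n = 1 << m
    # Left-multiplying by R_n permutes row i to row bit-reverse(i)
    rev = [int(format(i, '0{}b'.format(m))[::-1], 2) for i in range(n)]
    return [G[rev[i]] for i in range(n)]

def polar_encode(u, G):
    """x = u G_n over GF(2)."""
    n = len(u)
    return [sum(u[i] * G[i][j] for i in range(n)) % 2 for j in range(n)]
```

For n = 4, for instance, encoding the unit vector u = (0, 0, 0, 1) returns the last row of G_4, namely (1, 1, 1, 1).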
III. GENERAL DECODING MODEL WITH SOURCE REDUNDANCY

In a system where the source is compressed without removing all the redundancy, the leftover redundancy can act as side information that helps improve the decoding performance, while no change is made to the encoding structure. In our work, the source redundancy is modeled as side information, as shown in Fig. 1. There are two parallel channels: one transmits the normal codewords, and the other transmits the side information. The source is first source-encoded into u and channel-encoded into x, and then transmitted through a channel to the decoder. The decoder receives y and has the side information v available. v is correlated with the source, and the correlation depends on the redundancy left in the source.

[Fig. 1: A joint decoding model. The source is passed through a source encoder (output u) and a channel encoder (output x), transmitted over the channel W, and received as y at the decoder; the side information v is also available at the decoder.]

Lemma 1. Assume the source is encoded by source and channel codes and transmitted through a DMC with transition probability P(y|x). Each codeword is chosen i.i.d. from the distribution Q, and n denotes the length of the code. Assume the side information v, with prior probability p(v), is available at the decoder, the source u reaching the decoder through a side-information channel with transition probability p(v|u). The average error probability of the ensemble is bounded by

P_e ≤ 2^{−nE_0(ρ,Q) + E_s(ρ)}, ρ ∈ (0, 1],

where E_0(ρ, Q) and E_s(ρ) are defined as

E_0(ρ, Q) = −log Σ_j ( Σ_k Q(k) P(j|k)^{1/(1+ρ)} )^{1+ρ},

E_s(ρ) = log Σ_v p(v) ( Σ_u p(u|v)^{1/(1+ρ)} )^{1+ρ}.

Proof. Let P_{e,v} denote the average error probability given side information v. From Exercise 5.16 of [9] we know that if the prior probability of the source is available at the decoder, then with MAP decoding the average error probability of the ensemble given v is bounded by

P_{e,v} ≤ ( Σ_u p(u|v)^{1/(1+ρ)} )^{1+ρ} · 2^{−nE_0(ρ,Q)}.   (1)

The average error probability is then

P_e = Σ_v p(v) P_{e,v} ≤ 2^{−nE_0(ρ,Q)} · Σ_v p(v) ( Σ_u p(u|v)^{1/(1+ρ)} )^{1+ρ},

from which E_s(ρ) follows.

Example 1. Assume that the source is a binary memoryless source (BMS) of length l, and the channel code has length n.
Each symbol in the source is i.i.d. Assume that the channel transmitting the side information is a BEC with erasure probability β. Then E_s(ρ) can be computed as

E_s(ρ) = l log Σ_v p(v) ( Σ_u p(u|v)^{1/(1+ρ)} )^{1+ρ} = l log( (1−β) + β·2^ρ ).

Without loss of generality, let R = l/n; then

E_s(ρ) = Rn log( (1−β) + β·2^ρ ).   (2)
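The exponent gain from side information can be evaluated numerically. The sketch below is an illustration, not from the paper: it assumes a BEC(ε) transmission channel, for which the Gallager function with uniform inputs is E_0(ρ) = −log₂((1−ε)·2^{−ρ} + ε), and combines it with E_s(ρ)/n from (2); the function name and the grid search over ρ are our own choices:

```python
import math

def random_exponent(R, eps, beta, grid=1000):
    """E_r(R) = max over rho in (0, 1] of E_0(rho) - E_s(rho)/n,
    for a BEC(eps) data channel and a BEC(beta) side-information channel.
    Clamped at 0, since the error probability is at most 1."""
    best = 0.0
    for k in range(1, grid + 1):
        rho = k / grid
        E0 = -math.log2((1 - eps) * 2 ** (-rho) + eps)
        Es_over_n = R * math.log2((1 - beta) + beta * 2 ** rho)
        best = max(best, E0 - Es_over_n)
    return best
```

With β = 1 the side channel erases everything, E_s(ρ)/n reduces to Rρ, and the classical random-coding bound without side information is recovered; smaller β gives a strictly larger exponent at the same rate.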
[Fig. 2: Random error exponent for the BEC with side information, plotted versus the rate R, for no side information and for β = 0.9, 0.8, 0.7, 0.6.]

Assume the channel is BEC(ε) and the channel for the side information is BEC(β). E_0(ρ) is derived as

E_0(ρ) = −log( (1−ε)·2^{−ρ} + ε ),

and E_s(ρ) is given in (2). Let E_r(R) = max_{ρ∈(0,1], Q} [ E_0(ρ, Q) − E_s(ρ)/n ] denote the random-coding exponent. We plot E_r(R) in Fig. 2. It can be observed that the random-coding exponent with β ∈ [0, 1) is larger than that without side information. The maximum rate for which E_r(R) > 0 is also improved by the side information.

The preceding discussion shows that for the random code ensemble, the rate can be improved by side information available at the decoder. In the following sections, we consider a practical joint decoding scheme with polar codes. We study a particular model for the side information and analyze the improved rate of polar codes for that model.

IV. THEORETICAL MODEL FOR NATURAL REDUNDANCY IN LANGUAGES

Let [n_0, t] denote a block code of length n_0 that can correct t erasures. The natural redundancy in the source is modeled such that each block of n_0 consecutive output bits from the side-information channel corresponds to a word in the dictionary or a longer text, and is assumed to be a codeword of an [n_0, t] outer code. Let B = (b_0, b_1, b_2, ...) be the bits in a compressed text. We assume that B is a sequence of [n_0, t] codewords. The model is suitable for many compression algorithms, such as Huffman coding and LZW coding with a fixed dictionary of patterns, where every binary codeword in the compressed text represents a sequence of characters in the original text, and the natural redundancy in multiple adjacent codewords (e.g., the sparsity of valid words, phrases, or grammatically correct sentences) can be used to correct erasures. Theoretically, the greater n_0 is, the better this model fits languages.

V. BIT-ALLOCATION ALGORITHM FOR OPTIMAL RATE IMPROVEMENT OF POLAR CODES

The natural redundancy in compressed texts can be used to improve the rate of a polar code.
Let C be an (n, k) polar code. Its encoder normally encodes n input bits U = (u_0, u_1, ..., u_{n−1}), k of which are information bits and n−k of which are frozen bits, into a polar codeword X = (x_0, x_1, ..., x_{n−1}). Let F ⊆ {0, 1, ..., n−1} denote the indices of the frozen bits. With natural redundancy in the information bits, we can improve the code rate by unfreezing some of the frozen bits, namely, using some frozen bits to also store information bits. Let E ⊆ F denote the indices of the bits we unfreeze. We use the bits {u_i : i ∈ {0, 1, ..., n−1} \ (F \ E)} to store the bits in B, and use SC decoding for error correction. Since B is a sequence of [n_0, t] codes, to make decoding successful the requirement is that for every [n_0, t] code, its first n_0 − t bits should be stored in bits (of U) whose indices are not in F. The code-rate improvement is ΔR = |E|/n. We derived a lower bound on ΔR in [4]. Here we present a new algorithm for mapping the information bits b_0, b_1, b_2, ... to the input bits of the polar code's encoder, which maximizes ΔR:

(1) As initialization, let α ← −1 and β ← −1.
(2) For j = 0, 1, 2, ...: if (j mod n_0) < n_0 − t (which means b_j is one of the first n_0 − t bits of an [n_0, t] code), let α ← min{i : α < i < n, i ∉ F} and then assign b_j to u_α. Otherwise (which means b_j is one of the last t bits of an [n_0, t] code), assign b_j as follows:
  1) If {i : i > α, i > β, i ∈ F} ≠ ∅, let β ← min{i : i > α, i > β, i ∈ F}, and assign b_j to u_β.
  2) If {i : i > α, i > β, i ∈ F} = ∅, let α ← min{i : α < i < n, i ∉ F} and then assign b_j to u_α.

The algorithm ends when there is no more value to assign to α (namely, when {i : α < i < n, i ∉ F} = ∅). E consists of the indices of all bits assigned as u_β. The algorithm maximizes |E| and therefore ΔR. It is a greedy algorithm with linear time complexity. We present a sketch of the proof of its optimality: for every [n_0, t] outer codeword, call its first n_0 − t bits pseudo-info bits, and call its last t bits pseudo-check bits.
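Using this terminology, the allocation steps (1)–(2) can be prototyped directly. The sketch below is our own illustration (with a caller-supplied frozen set F and hypothetical helper names); it returns both the bit assignment and the unfrozen set E:

```python
def allocate_bits(n, frozen, n0, t):
    """Greedy mapping of outer-code bits b_0, b_1, ... to inputs u_0..u_{n-1}.

    frozen: set of frozen-bit indices F. Each [n0, t] outer codeword places
    its first n0 - t pseudo-info bits on non-frozen inputs, and tries to place
    each of its last t pseudo-check bits on the smallest feasible frozen index.
    Returns (assignment, E): assignment maps outer-bit index j to an input
    index, and E is the set of unfrozen (reclaimed) frozen indices.
    """
    assignment, E = {}, set()
    alpha, beta = -1, -1
    j = 0
    while True:
        if (j % n0) < n0 - t:
            # pseudo-info bit: next non-frozen input after alpha
            cand = [i for i in range(alpha + 1, n) if i not in frozen]
            if not cand:
                break
            alpha = cand[0]
            assignment[j] = alpha
        else:
            # pseudo-check bit: smallest frozen index past both pointers,
            # else fall back to the next non-frozen input
            cand = [i for i in range(max(alpha, beta) + 1, n) if i in frozen]
            if cand:
                beta = cand[0]
                assignment[j] = beta
                E.add(beta)
            else:
                cand = [i for i in range(alpha + 1, n) if i not in frozen]
                if not cand:
                    break
                alpha = cand[0]
                assignment[j] = alpha
        j += 1
    return assignment, E
```

For example, with n = 8, F = {0, 1, 2, 4} and a [2, 1] outer code, the first pseudo-info bit lands on u_3, the first pseudo-check bit reclaims the frozen index 4 (so E = {4}), and ΔR = |E|/n = 1/8.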
Since SC decoding decodes the bits of the outer codewords in the order of their increasing indices, in encoding it is optimal to allocate the pseudo-info bits to non-frozen bits codeword by codeword, without interleaving them. For the pseudo-check bits, it is optimal to allocate them to as many frozen bits as possible; and each time we unfreeze a frozen bit to store a pseudo-check bit, it is optimal to choose the feasible frozen bit of minimum index. Due to space limitations, we skip the details of the proof.

VI. CONVERGENCE OF THE FROZEN-BIT DISTRIBUTION IN POLAR CODES

The improvement in code rate depends on the distribution of frozen bits in the polar code. (Generally speaking, the further back in the codeword the frozen-bit positions are, the more code-rate improvement can be achieved, because more frozen bits can be used by the [n_0, t] codes.) We characterize the distribution of frozen bits as follows. Given a real number y, let ⟨y⟩ be y−1 if y is an integer, and ⌊y⌋ otherwise. For x ∈ [0, 1], let F_n(x) be the frozen set (i.e., the indices of frozen bits) among the first ⟨nx⟩+1 bits of the polar code. Formally, given an arbitrarily small ε ∈ [0, 1),

F_n(x) ≜ { i ∈ [0, ⟨nx⟩] : I(W_m^{(i)}) ∈ [0, 1−ε) }.
Let f_n(x) ≜ |F_n(x)|/n ∈ [0, 1], which describes the distribution of frozen bits (more precisely, the proportion, relative to n, of frozen bits among the first ⟨nx⟩+1 bits). We show the convergence of f_n(x) in the following theorem.

Theorem 2. For a polar code with length n = 2^m and rate R = I(W) − δ, where δ is an arbitrarily small positive number, the function f_n(x) converges as n → ∞.

Proof. Assume a polar code has length n = 2^m and rate R. Let B_n(x), M_n(x) and A_n(x) be defined as follows: for an arbitrarily small ε ∈ [0, 1),

B_n(x) ≜ { i ∈ [0, ⟨nx⟩] : I(W_m^{(i)}) ∈ [0, ε] },
M_n(x) ≜ { i ∈ [0, ⟨nx⟩] : I(W_m^{(i)}) ∈ (ε, 1−ε) },
A_n(x) ≜ { i ∈ [0, ⟨nx⟩] : I(W_m^{(i)}) ∈ [1−ε, 1] }.

We have F_n(x) = B_n(x) ∪ M_n(x). Define h_n(x) ≜ |B_n(x)|/n, t_n(x) ≜ |M_n(x)|/n and g_n(x) ≜ |A_n(x)|/n. From the definition of the frozen set, f_n(x) = h_n(x) + t_n(x), and h_n(x) + t_n(x) + g_n(x) equals the fraction (⟨nx⟩+1)/n of bit channels considered. As n → ∞, almost all channels polarize, in the sense that g_n(1) → I(W), h_n(1) → 1 − I(W) and t_n(1) → 0.

For a polar code of length n = 2^m, one more step of transformation increases the code length to 2n. The i-th bit channel W_m^{(i)} is converted into W_{m+1}^{(2i)} and W_{m+1}^{(2i+1)}, where I(W_{m+1}^{(2i)}) ≤ I(W_m^{(i)}) ≤ I(W_{m+1}^{(2i+1)}). Define Δ_I ≜ I(W_m^{(i)}) − I(W_{m+1}^{(2i)}); since I(W_{m+1}^{(2i)}) + I(W_{m+1}^{(2i+1)}) = 2 I(W_m^{(i)}), we also have Δ_I = I(W_{m+1}^{(2i+1)}) − I(W_m^{(i)}). By computing Δ_I for the two extreme channels (BEC and BSC), upper and lower bounds on Δ_I can be derived. The maximum value of Δ_I is achieved for the BEC:

Δ_I^max = I(W_m^{(i)}) − I(W_m^{(i)})^2,

and the minimum value is achieved for the BSC:

Δ_I^min = H(2p(1−p)) − H(p),

where I(W_m^{(i)}) = 1 − H(p) and H(p) is the entropy of a BSC with crossover probability p.

Consider the i-th bit channel. If i ∈ A_n(x), then I(W_m^{(i)}) is bounded by 1−ε ≤ I(W_m^{(i)}) ≤ 1. It is easily seen that 2i+1 ∈ A_{2n}(x), since I(W_{m+1}^{(2i+1)}) ≥ I(W_m^{(i)}). Moreover, since I(W_{m+1}^{(2i)}) ≥ I(W_m^{(i)})^2, we have I(W_{m+1}^{(2i)}) ≥ (1−ε)^2. Since (1−ε)^2 > ε for ε < (3−√5)/2 ≈ 0.382, if ε < 0.382 it is guaranteed that I(W_{m+1}^{(2i)}) > ε, which indicates 2i ∉ B_{2n}(x). Similarly, if i ∈ B_n(x), then I(W_m^{(i)}) satisfies 0 ≤ I(W_m^{(i)}) ≤ ε.
Since I(W_{m+1}^{(2i+1)}) ≤ 2 I(W_m^{(i)}) − I(W_m^{(i)})^2, we have I(W_{m+1}^{(2i+1)}) ≤ 2ε − ε^2. Since 2ε − ε^2 < 1−ε for ε < (3−√5)/2 ≈ 0.382, if ε < 0.382 it is guaranteed that I(W_{m+1}^{(2i+1)}) < 1−ε, indicating 2i+1 ∉ A_{2n}(x). In summary, if ε < 0.382 the following constraints are satisfied: if i ∈ A_n(x), then 2i ∉ B_{2n}(x); and if i ∈ B_n(x), then 2i+1 ∉ A_{2n}(x). We can conclude that for n large enough the following hold:
1) If i ∈ B_n(x), then 2i ∈ B_{2n}(x), and 2i+1 can be in either B_{2n}(x) or M_{2n}(x);
2) If i ∈ A_n(x), then 2i+1 ∈ A_{2n}(x), and 2i can be in either A_{2n}(x) or M_{2n}(x);
3) If i ∈ M_n(x), then 2i can be in either M_{2n}(x) or B_{2n}(x), and 2i+1 can be in either M_{2n}(x) or A_{2n}(x).

From the above facts, |A_{2n}(x)| can be bounded by

|A_{2n}(x)| ≤ 2|A_n(x)| + |M_n(x)|.

Thus g_{2n}(x) is bounded by

g_{2n}(x) = |A_{2n}(x)|/(2n) ≤ |A_n(x)|/n + |M_n(x)|/(2n) = g_n(x) + t_n(x)/2.

Since f_n(x) + g_n(x) equals the fraction of bit channels considered, which is the same for lengths n and 2n up to an O(1/n) term, this gives a lower bound for f_{2n}(x):

f_{2n}(x) ≥ f_n(x) − t_n(x)/2.   (3)

On the other hand, |B_{2n}(x)| is upper bounded by

|B_{2n}(x)| ≤ 2|B_n(x)| + |M_n(x)|.

Thus h_{2n}(x) is bounded by

h_{2n}(x) = |B_{2n}(x)|/(2n) ≤ h_n(x) + t_n(x)/2,

and we get an upper bound on f_{2n}(x):

f_{2n}(x) = h_{2n}(x) + t_{2n}(x) ≤ h_n(x) + t_n(x)/2 + t_{2n}(x) = f_n(x) − t_n(x)/2 + t_{2n}(x).   (4)

Combining (3) and (4), f_{2n}(x) is bounded by

f_n(x) − t_n(x)/2 ≤ f_{2n}(x) ≤ f_n(x) − t_n(x)/2 + t_{2n}(x).

By induction, taking s steps of transformation for any s > 0, f_{2^s n}(x) is bounded by

f_n(x) − (1/2) Σ_{i=1}^{s} t_{2^{i−1}n}(x) ≤ f_{2^s n}(x) ≤ f_n(x) + (1/2) Σ_{i=1}^{s} t_{2^{i−1}n}(x) − t_n(x) + t_{2^s n}(x).   (5)

Taking the limit on all sides of (5), and noting that almost all bit channels polarize in the limit of blocklength so that lim_{n→∞} t_n(x) = 0, we finally obtain

lim_{n→∞} f_{2^s n}(x) = lim_{n→∞} f_n(x).
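The convergence can also be checked numerically for the BEC, where the bit-channel capacities satisfy the exact recursion I(W^{(2i)}) = I(W^{(i)})² and I(W^{(2i+1)}) = 2 I(W^{(i)}) − I(W^{(i)})². The sketch below is our own illustration; the fixed threshold 0.5 stands in for the 1−ε freezing criterion and, for BEC(0.5), freezes about half of the channels, giving a rate-1/2 base code:

```python
def bec_bit_capacities(m, eps):
    """Capacities I(W^(i)) of the 2^m bit channels of a polar code on BEC(eps)."""
    caps = [1.0 - eps]
    for _ in range(m):
        # exact BEC recursion: children at (2i, 2i+1) get (I^2, 2I - I^2)
        caps = [v for c in caps for v in (c * c, 2 * c - c * c)]
    return caps

def frozen_fraction(caps, x, thresh=0.5):
    """f_n(x): fraction, relative to n, of the first ~nx bit channels
    whose capacity falls below the freezing threshold."""
    n = len(caps)
    return sum(1 for c in caps[: int(n * x)] if c < thresh) / n

f10 = frozen_fraction(bec_bit_capacities(10, 0.5), 0.5)  # n = 1024
f12 = frozen_fraction(bec_bit_capacities(12, 0.5), 0.5)  # n = 4096
```

Evaluating f_n(1/2) for n = 1024 and n = 4096 gives values that differ only slightly, consistent with the convergence asserted by Theorem 2.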
[Fig. 3: Distributions of frozen bits f_n(x) for rate-1/2 polar codes with code lengths n = 1024, 4096, 32768 and 65536.]

We show f_n(x) for rate-1/2 polar codes with different code lengths in Fig. 3. It can be seen that the distributions are very close to each other, which validates Theorem 2.

VII. RATE IMPROVEMENT

In this section, we show the convergence of the improved rate of polar codes.

Theorem 3. Assume a polar code has length n and rate R = I(W) − δ, where δ is an arbitrarily small positive number. Let the polar code be outer-coded by a set of [n_0, t] block codes, where n_0 and t scale linearly with n. If the frozen-bit distribution f_n(x) converges to some function f(x), then the improved rate of polar codes ΔR_n converges as n → ∞.

Sketch of proof. Consider the optimal bit-allocation scheme in Section V. The bits are grouped into [n_0, t] outer codes sequentially, where the first n_0 − t bits are information bits and the last t bits are what can be unfrozen. Let nx_i and ny_i, i ∈ [1, s], be the positions of the last information bit and the last bit in the i-th outer code, respectively. Given the convergence of f_n(x) and the linearity of n_0 and t in n, we can show the convergence of x_i and y_i, i ∈ [1, s], and thus the convergence of s. Since ΔR_n = (st + c)/n, where c < t, the convergence of ΔR_n follows.

Let ΔR_max denote the optimal code-rate improvement achieved by the bit-allocation algorithm. We illustrate examples of ΔR_max for rate-1/2 polar codes of different lengths in Fig. 4, for three different [n_0, t] outer codes: the [2, 1], [7, 2] and [8, 3] codes. It can be seen that ΔR_max converges as n increases. The convergence of the code-rate improvement (for both the algorithm here and the lower-bound result in [4]) depends on the convergence of f_n(x).
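To see the convergence of ΔR empirically, the frozen-set construction and the greedy allocation can be combined end to end. The sketch below is our own illustration (a capacity-based BEC frozen-set design with a capacity-matched rate, not the paper's exact construction); it estimates ΔR = |E|/n for an [n_0, t] outer code:

```python
def bec_bit_capacities(m, eps):
    """Capacities I(W^(i)) of the 2^m bit channels for BEC(eps)."""
    caps = [1.0 - eps]
    for _ in range(m):
        # exact BEC recursion: children at (2i, 2i+1) get (I^2, 2I - I^2)
        caps = [v for c in caps for v in (c * c, 2 * c - c * c)]
    return caps

def delta_r(m, eps, n0, t):
    """Estimate Delta R = |E|/n for [n0, t] outer codes on a polar code for
    BEC(eps), freezing the n - k lowest-capacity channels, k ~ n * I(W)."""
    n = 1 << m
    caps = bec_bit_capacities(m, eps)
    k = round(n * (1.0 - eps))
    frozen = set(sorted(range(n), key=lambda i: caps[i])[: n - k])
    unfrozen = 0
    alpha, beta = -1, -1
    while True:
        ok = True
        for _ in range(n0 - t):  # pseudo-info bits -> next non-frozen inputs
            nxt = next((i for i in range(alpha + 1, n) if i not in frozen), None)
            if nxt is None:
                ok = False
                break
            alpha = nxt
        if not ok:
            break
        for _ in range(t):  # pseudo-check bits -> smallest feasible frozen index
            nxt = next((i for i in range(max(alpha, beta) + 1, n) if i in frozen), None)
            if nxt is not None:
                beta = nxt
                unfrozen += 1
            else:  # no frozen index left: fall back to a non-frozen input
                nxt = next((i for i in range(alpha + 1, n) if i not in frozen), None)
                if nxt is None:
                    ok = False
                    break
                alpha = nxt
        if not ok:
            break
    return unfrozen / n
```

Running this for a rate-1/2 design on BEC(0.5) with a [2, 1] outer code at successive lengths shows the estimated ΔR settling toward a limit as n grows, in line with Theorem 3.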
[TABLE I: Comparison of the improved rates ΔR_max and ΔR_l for [2, 1], [7, 2] and [8, 3] outer codes.]

[Fig. 4: Improved rates with different outer [n_0, t] codes, plotted against log_2(n). The [2, 1], [7, 2] and [8, 3] outer codes can correct 1, 2 and 3 erasures, respectively.]

The code-rate improvement achieved by the optimal bit-allocation algorithm exceeds the lower bound on ΔR (denoted by ΔR_l) derived in [4], which was obtained constructively using a piecewise-linear approximation of the distribution function f_n(x). A comparison of the two code-rate improvements is shown in Table I for sufficiently large polar-code length n.

VIII. CONCLUSION

Source redundancy is exploited to improve the decoding of channel codes. We model the source redundancy as side information, and show the improved random-coding exponent and improved rate. Practically, polar codes are explored for joint decoding with natural redundancy, and we show the improved rate of polar codes. We give a particular model for natural redundancy in languages, and propose an optimal information-bit allocation algorithm to achieve the maximum improved rate. A proof of the convergence of the frozen-bit distribution is given, and we show the convergence of the maximum improved rate. This work can be extended by modeling the source redundancy as block codes with different lengths and erasure-correcting abilities. By reordering the information bits at the word level before encoding, based on the erasure-correcting ability of the outer codes, better performance can be achieved.

REFERENCES

[1] A. Jiang, P. Upadhyaya, E. Haratsch, and J. Bruck, "Error correction by natural redundancy for long term storage," in Proc. NVMW.
[2] A. Jiang, Y. Li, and J. Bruck, "Error correction through language processing," in Proc. IEEE ITW, 2015.
[3] A. Jiang, Y. Li, and J. Bruck, "Enhanced error correction via language processing," in Proc. NVMW.
[4] Y. Wang, M. Qin, K. R. Narayanan, A. Jiang, and Z. Bandic, "Joint source-channel decoding of polar codes for language-based sources," in Proc. IEEE Globecom.
[5] Y. Wang, A. Jiang, and K. R. Narayanan, "Modeling and analysis of joint decoding of language-based sources with polar codes," in Proc. NVMW.
[6] P. Upadhyaya and A. Jiang, "LDPC decoding with natural redundancy," in Proc. NVMW.
[7] Y. Li, Y. Wang, A. Jiang, and J. Bruck, "Content-assisted file decoding for nonvolatile memories," in Proc. 46th Asilomar Conference on Signals, Systems and Computers, 2012.
[8] E. Arıkan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
[9] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.
CSCI 2570 Introduction to Nanocomputing Information Theory John E Savage What is Information Theory Introduced by Claude Shannon. See Wikipedia Two foci: a) data compression and b) reliable communication
More informationWorst Case Analysis of a Saturated Gust Loads Alleviation System
Worst Case Analysis of a Saturated Gust Loads Alleiation System Andreas Knoblach Institute of System Dynamics and Control, German Aerospace Center (DLR), 82234 Weßling, Germany Harald Pfifer and Peter
More informationLECTURE 10. Last time: Lecture outline
LECTURE 10 Joint AEP Coding Theorem Last time: Error Exponents Lecture outline Strong Coding Theorem Reading: Gallager, Chapter 5. Review Joint AEP A ( ɛ n) (X) A ( ɛ n) (Y ) vs. A ( ɛ n) (X, Y ) 2 nh(x)
More informationCoding for Noisy Write-Efficient Memories
Coding for oisy Write-Efficient Memories Qing Li Computer Sci. & Eng. Dept. Texas A & M University College Station, TX 77843 qingli@cse.tamu.edu Anxiao (Andrew) Jiang CSE and ECE Departments Texas A &
More informationMaximum Likelihood Decoding of Codes on the Asymmetric Z-channel
Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Pål Ellingsen paale@ii.uib.no Susanna Spinsante s.spinsante@univpm.it Angela Barbero angbar@wmatem.eis.uva.es May 31, 2005 Øyvind Ytrehus
More informationLecture 3a: The Origin of Variational Bayes
CSC535: 013 Advanced Machine Learning Lecture 3a: The Origin of Variational Bayes Geoffrey Hinton The origin of variational Bayes In variational Bayes, e approximate the true posterior across parameters
More informationSymmetric Functions and Difference Equations with Asymptotically Period-two Solutions
International Journal of Difference Equations ISSN 0973-532, Volume 4, Number, pp. 43 48 (2009) http://campus.mst.edu/ijde Symmetric Functions Difference Equations with Asymptotically Period-two Solutions
More informationPolar Codes: Graph Representation and Duality
Polar Codes: Graph Representation and Duality arxiv:1312.0372v1 [cs.it] 2 Dec 2013 M. Fossorier ETIS ENSEA/UCP/CNRS UMR-8051 6, avenue du Ponceau, 95014, Cergy Pontoise, France Email: mfossorier@ieee.org
More informationRandomized Smoothing Networks
Randomized Smoothing Netorks Maurice Herlihy Computer Science Dept., Bron University, Providence, RI, USA Srikanta Tirthapura Dept. of Electrical and Computer Engg., Ioa State University, Ames, IA, USA
More informationJoint Coding for Flash Memory Storage
ISIT 8, Toronto, Canada, July 6-11, 8 Joint Coding for Flash Memory Storage Anxiao (Andrew) Jiang Computer Science Department Texas A&M University College Station, TX 77843, U.S.A. ajiang@cs.tamu.edu Abstract
More informationOptimal Joint Detection and Estimation in Linear Models
Optimal Joint Detection and Estimation in Linear Models Jianshu Chen, Yue Zhao, Andrea Goldsmith, and H. Vincent Poor Abstract The problem of optimal joint detection and estimation in linear models with
More informationECE Information theory Final (Fall 2008)
ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1
More informationAlgorithms and Data Structures 2014 Exercises and Solutions Week 14
lgorithms and ata tructures 0 xercises and s Week Linear programming. onsider the following linear program. maximize x y 0 x + y 7 x x 0 y 0 x + 3y Plot the feasible region and identify the optimal solution.
More informationA simple one sweep algorithm for optimal APP symbol decoding of linear block codes
A simple one sweep algorithm for optimal APP symbol decoding of linear block codes Johansson, Thomas; Zigangiro, Kamil Published in: IEEE Transactions on Information Theory DOI: 10.1109/18.737541 Published:
More informationTowards Universal Cover Decoding
International Symposium on Information Theory and its Applications, ISITA2008 Auckland, New Zealand, 7-10, December, 2008 Towards Uniersal Coer Decoding Nathan Axig, Deanna Dreher, Katherine Morrison,
More informationAn Alternative Proof of Channel Polarization for Channels with Arbitrary Input Alphabets
An Alternative Proof of Channel Polarization for Channels with Arbitrary Input Alphabets Jing Guo University of Cambridge jg582@cam.ac.uk Jossy Sayir University of Cambridge j.sayir@ieee.org Minghai Qin
More informationA Survey on Binary Message LDPC decoder
A Surey on Binary Message LDPC decoder Emmanuel Boutillon *, Chris Winstead * Uniersité de Bretagne Sud Utah State Uniersity Noember the 4 th, 204 Outline Classifications of BM LDPC decoder State of the
More informationConstructing Polar Codes Using Iterative Bit-Channel Upgrading. Arash Ghayoori. B.Sc., Isfahan University of Technology, 2011
Constructing Polar Codes Using Iterative Bit-Channel Upgrading by Arash Ghayoori B.Sc., Isfahan University of Technology, 011 A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree
More informationLecture J. 10 Counting subgraphs Kirchhoff s Matrix-Tree Theorem.
Lecture J jacques@ucsd.edu Notation: Let [n] [n] := [n] 2. A weighted digraph is a function W : [n] 2 R. An arborescence is, loosely speaking, a digraph which consists in orienting eery edge of a rooted
More informationEntropies & Information Theory
Entropies & Information Theory LECTURE I Nilanjana Datta University of Cambridge,U.K. See lecture notes on: http://www.qi.damtp.cam.ac.uk/node/223 Quantum Information Theory Born out of Classical Information
More informationOptimization Problems in Multiple Subtree Graphs
Optimization Problems in Multiple Subtree Graphs Danny Hermelin Dror Rawitz February 28, 2010 Abstract We study arious optimization problems in t-subtree graphs, the intersection graphs of t- subtrees,
More informationThe peculiarities of the model: - it is allowed to use by attacker only noisy version of SG C w (n), - CO can be known exactly for attacker.
Lecture 6. SG based on noisy channels [4]. CO Detection of SG The peculiarities of the model: - it is alloed to use by attacker only noisy version of SG C (n), - CO can be knon exactly for attacker. Practical
More informationCompound Polar Codes
Compound Polar Codes Hessam Mahdavifar, Mostafa El-Khamy, Jungwon Lee, Inyup Kang Mobile Solutions Lab, Samsung Information Systems America 4921 Directors Place, San Diego, CA 92121 {h.mahdavifar, mostafa.e,
More informationA Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission
Li and Kang: A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing 1 A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization
More informationBloom Filters and Locality-Sensitive Hashing
Randomized Algorithms, Summer 2016 Bloom Filters and Locality-Sensitive Hashing Instructor: Thomas Kesselheim and Kurt Mehlhorn 1 Notation Lecture 4 (6 pages) When e talk about the probability of an event,
More informationECEN 655: Advanced Channel Coding
ECEN 655: Advanced Channel Coding Course Introduction Henry D. Pfister Department of Electrical and Computer Engineering Texas A&M University ECEN 655: Advanced Channel Coding 1 / 19 Outline 1 History
More informationGraph-based codes for flash memory
1/28 Graph-based codes for flash memory Discrete Mathematics Seminar September 3, 2013 Katie Haymaker Joint work with Professor Christine Kelley University of Nebraska-Lincoln 2/28 Outline 1 Background
More informationEE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes
EE229B - Final Project Capacity-Approaching Low-Density Parity-Check Codes Pierre Garrigues EECS department, UC Berkeley garrigue@eecs.berkeley.edu May 13, 2005 Abstract The class of low-density parity-check
More informationNoisy-Channel Coding
Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/05264298 Part II Noisy-Channel Coding Copyright Cambridge University Press 2003.
More informationSC-Fano Decoding of Polar Codes
SC-Fano Decoding of Polar Codes Min-Oh Jeong and Song-Nam Hong Ajou University, Suwon, Korea, email: {jmo0802, snhong}@ajou.ac.kr arxiv:1901.06791v1 [eess.sp] 21 Jan 2019 Abstract In this paper, we present
More informationRun-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE
General e Image Coder Structure Motion Video x(s 1,s 2,t) or x(s 1,s 2 ) Natural Image Sampling A form of data compression; usually lossless, but can be lossy Redundancy Removal Lossless compression: predictive
More informationDynamic Vehicle Routing with Heterogeneous Demands
Dynamic Vehicle Routing with Heterogeneous Demands Stephen L. Smith Marco Paone Francesco Bullo Emilio Frazzoli Abstract In this paper we study a ariation of the Dynamic Traeling Repairperson Problem DTRP
More informationMax-Margin Ratio Machine
JMLR: Workshop and Conference Proceedings 25:1 13, 2012 Asian Conference on Machine Learning Max-Margin Ratio Machine Suicheng Gu and Yuhong Guo Department of Computer and Information Sciences Temple University,
More informationUNIT I INFORMATION THEORY. I k log 2
UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper
More informationEE376A - Information Theory Final, Monday March 14th 2016 Solutions. Please start answering each question on a new page of the answer booklet.
EE376A - Information Theory Final, Monday March 14th 216 Solutions Instructions: You have three hours, 3.3PM - 6.3PM The exam has 4 questions, totaling 12 points. Please start answering each question on
More informationLinear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channels
1 Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channels Shunsuke Horii, Toshiyasu Matsushima, and Shigeichi Hirasawa arxiv:1508.01640v2 [cs.it] 29 Sep 2015 Abstract In this paper,
More informationlossless, optimal compressor
6. Variable-length Lossless Compression The principal engineering goal of compression is to represent a given sequence a, a 2,..., a n produced by a source as a sequence of bits of minimal possible length.
More informationECE Information theory Final
ECE 776 - Information theory Final Q1 (1 point) We would like to compress a Gaussian source with zero mean and variance 1 We consider two strategies In the first, we quantize with a step size so that the
More informationInformation and Entropy
Information and Entropy Shannon s Separation Principle Source Coding Principles Entropy Variable Length Codes Huffman Codes Joint Sources Arithmetic Codes Adaptive Codes Thomas Wiegand: Digital Image Communication
More informationLecture 6 I. CHANNEL CODING. X n (m) P Y X
6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder
More informationLecture 11: Polar codes construction
15-859: Information Theory and Applications in TCS CMU: Spring 2013 Lecturer: Venkatesan Guruswami Lecture 11: Polar codes construction February 26, 2013 Scribe: Dan Stahlke 1 Polar codes: recap of last
More informationX 1 : X Table 1: Y = X X 2
ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access
More informationThe Poisson Channel with Side Information
The Poisson Channel with Side Information Shraga Bross School of Enginerring Bar-Ilan University, Israel brosss@macs.biu.ac.il Amos Lapidoth Ligong Wang Signal and Information Processing Laboratory ETH
More information(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute
ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html
More informationA matrix Method for Interval Hermite Curve Segmentation O. Ismail, Senior Member, IEEE
International Journal of Video&Image Processing Network Security IJVIPNS-IJENS Vol:15 No:03 7 A matrix Method for Interal Hermite Cure Segmentation O. Ismail, Senior Member, IEEE Abstract Since the use
More informationPOLAR CODES FOR ERROR CORRECTION: ANALYSIS AND DECODING ALGORITHMS
ALMA MATER STUDIORUM UNIVERSITÀ DI BOLOGNA CAMPUS DI CESENA SCUOLA DI INGEGNERIA E ARCHITETTURA CORSO DI LAUREA MAGISTRALE IN INGEGNERIA ELETTRONICA E TELECOMUNICAZIONI PER L ENERGIA POLAR CODES FOR ERROR
More informationLecture 4 : Introduction to Low-density Parity-check Codes
Lecture 4 : Introduction to Low-density Parity-check Codes LDPC codes are a class of linear block codes with implementable decoders, which provide near-capacity performance. History: 1. LDPC codes were
More informationChannel Polarization and Blackwell Measures
Channel Polarization Blackwell Measures Maxim Raginsky Abstract The Blackwell measure of a binary-input channel (BIC is the distribution of the posterior probability of 0 under the uniform input distribution
More informationPerformance of Polar Codes for Channel and Source Coding
Performance of Polar Codes for Channel and Source Coding Nadine Hussami AUB, Lebanon, Email: njh03@aub.edu.lb Satish Babu Korada and üdiger Urbanke EPFL, Switzerland, Email: {satish.korada,ruediger.urbanke}@epfl.ch
More informationBivariate Uniqueness in the Logistic Recursive Distributional Equation
Bivariate Uniqueness in the Logistic Recursive Distributional Equation Antar Bandyopadhyay Technical Report # 629 University of California Department of Statistics 367 Evans Hall # 3860 Berkeley CA 94720-3860
More information5. Density evolution. Density evolution 5-1
5. Density evolution Density evolution 5-1 Probabilistic analysis of message passing algorithms variable nodes factor nodes x1 a x i x2 a(x i ; x j ; x k ) x3 b x4 consider factor graph model G = (V ;
More informationLecture 1. 1 Overview. 2 Maximum Flow. COMPSCI 532: Design and Analysis of Algorithms August 26, 2015
COMPSCI 532: Design and Analysis of Algorithms August 26, 205 Lecture Lecturer: Debmalya Panigrahi Scribe: Allen Xiao Oeriew In this lecture, we will define the maximum flow problem and discuss two algorithms
More informationThe Study of Inverted V-Type Heat Pipe Cooling Mechanism Based on High Heat Flux Heating Elements
th IHPS, Taipei, Taian, No. 6-9, The Study of Inerted V-Type Heat Pipe Cooling Mechanism Based on High Heat Flux Heating Elements Li Jing, Ji Shengtao, Yao Zemin The Key Lab of Enhanced Heat Transfer and
More informationAn example of Lagrangian for a non-holonomic system
Uniersit of North Georgia Nighthaks Open Institutional Repositor Facult Publications Department of Mathematics 9-9-05 An eample of Lagrangian for a non-holonomic sstem Piotr W. Hebda Uniersit of North
More informationLecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122
Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel
More informationBalanced Partitions of Vector Sequences
Balanced Partitions of Vector Sequences Imre Bárány Benjamin Doerr December 20, 2004 Abstract Let d,r N and be any norm on R d. Let B denote the unit ball with respect to this norm. We show that any sequence
More informationThe Equivalence Between Slepian-Wolf Coding and Channel Coding Under Density Evolution
2534 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL 57, NO 9, SEPTEMBER 2009 The Equialence Between Slepian-Wolf Coding and Channel Coding Under Density Eolution Jun Chen, Da-ke He, and Ashish Jagmohan Abstract
More informationPolar Write Once Memory Codes
Polar Write Once Memory Codes David Burshtein, Senior Member, IEEE and Alona Strugatski Abstract arxiv:1207.0782v2 [cs.it] 7 Oct 2012 A coding scheme for write once memory (WOM) using polar codes is presented.
More informationarxiv: v1 [math.gt] 2 Nov 2010
CONSTRUCTING UNIVERSAL ABELIAN COVERS OF GRAPH MANIFOLDS HELGE MØLLER PEDERSEN arxi:101105551 [mathgt] 2 No 2010 Abstract To a rational homology sphere graph manifold one can associate a weighted tree
More informationfor some error exponent E( R) as a function R,
. Capacity-achieving codes via Forney concatenation Shannon s Noisy Channel Theorem assures us the existence of capacity-achieving codes. However, exhaustive search for the code has double-exponential
More information