Coding over an Erasure Channel with a Large Alphabet Size


Shervan Fashandi, Shahab Oveis Gharan, and Amir K. Khandani
ECE Dept., University of Waterloo, Waterloo, ON, Canada, N2L 3G1
email: {sfashand, shahab, khandani}@cst.uwaterloo.ca

Abstract: An erasure channel with a fixed alphabet size q, where q ≫ 1, is studied. It is proved that, over any erasure channel with or without memory, Maximum Distance Separable (MDS) codes achieve the minimum probability of error under maximum likelihood decoding. Assuming a memoryless erasure channel, the error exponent of MDS codes is compared with that of random codes. It is shown that the envelopes of these two exponents are identical for rates above the critical rate. Noting the optimality of MDS codes, it is concluded that random coding is exponentially optimal as long as the block size N satisfies N ≤ q + 1.¹

I. INTRODUCTION

Erasure channels with large alphabet sizes have recently received significant attention in networking applications. Different erasure channel models are adopted to study the performance of end-to-end connections over the Internet [1], [2]. In such models, each packet is seen as a 2^b-ary symbol, where b is the packet length in bits. In this work, a memoryless erasure channel with a fixed but large alphabet size is considered. The error probabilities over this channel under maximum likelihood decoding of MDS and random codebooks are compared and shown to be exponentially identical for rates above the critical rate.

Shannon [3] was the first to observe that the error probability of maximum likelihood decoding of a random code, P_E^rand, can be upper-bounded by a function decaying exponentially with the code block length N. This exponent is positive as long as the rate stays below the channel capacity, R < C. Following this result, tighter bounds were proposed in the literature [4], [5]. Interestingly, this upper bound on P_E^rand remains valid regardless of the alphabet size, even in the case where q is larger than the block size N (see, e.g., the steps of the proofs in [6]).
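The packet-as-symbol abstraction above is easy to exercise in simulation. The following is a minimal sketch, not from the paper (the function name and parameter values are our own), of a memoryless q-ary erasure channel in which each packet of b bits is one symbol from an alphabet of size q = 2^b and is erased independently with probability π:

```python
import random

def erasure_channel(symbols, pi, rng):
    """Memoryless erasure channel: each q-ary symbol is independently
    replaced by the erasure symbol (None stands in for xi) with prob. pi."""
    return [None if rng.random() < pi else s for s in symbols]

b = 12                    # packet length in bits (illustrative value)
q = 2 ** b                # alphabet size q = 2^b, q >> 1
pi = 0.015                # erasure probability
rng = random.Random(0)

packets = [rng.randrange(q) for _ in range(10_000)]  # i.i.d. q-ary symbols
received = erasure_channel(packets, pi, rng)

erasures = sum(1 for y in received if y is None)
print(f"empirical erasure rate: {erasures / len(packets):.4f}")
```

Note that unerased symbols pass through unchanged; this is the defining property p(y_j ∉ {x_j, ξ} | x_j) = 0 used later in Proposition I.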
There is also a lower bound on the probability of error under random coding, known as the sphere packing bound [7]. For channels with a relatively small alphabet size, both the sphere packing bound and the random coding upper bound on the error probability are exponentially tight for rates above the critical rate. However, the sphere packing bound is not tight if the alphabet size q is comparable to the coding block length N. For rates below the critical rate, modifications of random coding have been proposed to achieve tighter bounds [8].

¹ Financial support provided by Nortel, and the corresponding matching funds by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Ontario Centres of Excellence (OCE), are gratefully acknowledged.

Maximum Distance Separable (MDS) codes [9] are optimum in the sense that they achieve the largest possible minimum distance, d_min, among all block codes of the same size. Indeed, any [N, K] MDS code can be successfully decoded from any subset of its coded symbols of size K or more. This property makes MDS codes suitable for use over erasure channels like the Internet [1], [2], [10]. However, the practical encoding-decoding algorithms for such codes have quadratic time complexity in terms of the code block length [11]. Theoretically, more efficient O(N log² N) MDS codes can be constructed by evaluating and interpolating polynomials over specially chosen finite fields using the Discrete Fourier Transform [12]. In practice, however, these methods cannot compete with the quadratic methods except for extremely large block sizes. Recently, a family of almost-MDS codes with low encoding-decoding complexity (linear in the block length) has been proposed and shown to provide a practical alternative for coding over erasure channels like the Internet [13]. In these codes, any subset of symbols of size K(1 + ε) is sufficient to recover the original K symbols with high probability [13]. Digital Fountain codes, based on the idea of almost-MDS codes, have been proposed for multicasting information to many users over an erasure channel [14], [15].
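The defining MDS property, that any K of the N coded symbols suffice for decoding, can be seen concretely in a toy Reed-Solomon code, the classical MDS family: a codeword is the evaluation of a degree-< K polynomial at N distinct points of F_q, so any K surviving evaluations pin the polynomial down by Lagrange interpolation. A minimal sketch over a small prime field (all helper names and parameter values are ours; the roughly quadratic-time decoder below is of the kind referred to above):

```python
# Toy [N, K] Reed-Solomon code over the prime field F_p: an MDS code,
# so ANY K unerased coordinates determine the codeword uniquely.
p, N, K = 13, 8, 3          # N <= q + 1, as required for MDS feasibility

def encode(msg):
    """Evaluate the degree-<K message polynomial at the points 0..N-1."""
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
            for x in range(N)]

def lagrange_eval(pts, x):
    """Evaluate the unique degree-<K polynomial through pts at x, mod p."""
    total = 0
    for xi, yi in pts:
        num = den = 1
        for xj, _ in pts:
            if xj != xi:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p  # den^-1 mod p
    return total

def decode(received):
    """Rebuild the whole codeword from any K unerased (pos, symbol) pairs."""
    pts = [(x, y) for x, y in enumerate(received) if y is not None]
    assert len(pts) >= K, "more than N - K erasures: decoding failure"
    return [lagrange_eval(pts[:K], x) for x in range(N)]

cw = encode([5, 11, 2])
received = [cw[x] if x in (1, 3, 6) else None for x in range(N)]  # 5 erasures
assert decode(received) == cw
print("recovered from exactly K =", K, "surviving symbols")
```

Here exactly K = 3 of the N = 8 symbols survive, the worst case a suboptimal (non-ML) MDS decoder can handle, and the codeword is still recovered exactly.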
In this work, a memoryless erasure channel with a fixed but large alphabet size is studied. First, it is proved that MDS block codes offer the minimum probability of decoding error over any erasure channel. Then, the error exponents of MDS and random codes over a memoryless erasure channel are analyzed and shown to be identical at rates above the critical rate. Combining the two results, we conclude that random codes are exponentially as good as MDS codes (exponentially optimal) over a wide range of rates.

The rest of this paper is organized as follows. In Section II, the erasure channel model is introduced, and the assumption of large alphabet sizes is justified. Section III proves that MDS codes are optimum over any erasure channel. The error exponents of MDS codes and random codes over a memoryless erasure channel are compared in Section IV. Finally, Section V concludes the paper.

II. ERASURE CHANNEL MODEL

The memoryless erasure channel studied in this work has alphabet size q and erasure probability π (see Fig. 1). The alphabet size is assumed to be fixed and large, i.e., q ≫ 1. Note that all the known MDS codes have alphabets of

Fig. 1. Memoryless erasure channel model with alphabet size q, probability of erasure π, and erasure symbol ξ.

a large size, growing at least linearly with the block length N. Indeed, a conjecture on MDS codes states that for every linear [N, K] MDS code over the Galois field F_q, if 1 < K < N, then N ≤ q + 1, except when q is even and K = 3 or K = q - 1, for which N ≤ q + 2 [16]. To have a feasible MDS code over a channel with alphabet size q, the block size should therefore satisfy N ≤ q + 1. Thus, throughout this paper, wherever we refer to large block sizes, we mean that N can grow large as long as it satisfies the constraint N ≤ q + 1.

The described channel model occurs in many practical scenarios such as the Internet. From an end-to-end protocol's perspective, the performance of the lower layers in the protocol stack can be modeled as a random channel, called an Internet channel. Since each packet usually includes an internal error detection mechanism (for instance, a Cyclic Redundancy Check), the Internet channel can be modeled as an erasure channel with packets as symbols [17]. If each packet contains b bits, the corresponding channel has an alphabet size of q = 2^b, which is huge for typical packet sizes. Therefore, in practical networking applications, the block size is usually much smaller than the alphabet size. Algebraic computation over Galois fields F_q of such large cardinalities is now practically feasible with the increasing processing power of electronic circuits. Note that network coding schemes, recently proposed and applied for content distribution over large networks, have a comparable computational complexity [18]-[20].

III. OPTIMALITY OF MDS CODES OVER ERASURE CHANNELS

Maximum Distance Separable codes are optimum in the sense of achieving the largest possible minimum distance, d_min, among all block codes of the same size [9]. The following proposition shows that MDS codes are also optimum over any erasure channel in the sense of achieving the minimum probability of decoding error.

Proposition I.
Consider an erasure channel (memoryless or with memory) with input vector x ∈ X^N, |X| = q, output vector y ∈ (X ∪ {ξ})^N, and transition probability p(y|x) satisfying

p(y_j ∉ {x_j, ξ} | x_j) = 0   for all j.

Define the erasure identifier vector e by e_j = 1 if y_j = ξ and e_j = 0 otherwise, and assume that p(e|x) is independent of x. Consider a block code of size [N, K] with equiprobable codewords over such an erasure channel, decoded by an optimum (maximum likelihood) decoder. The code has the minimum probability of decoding error among all [N, K] block codes if it is Maximum Distance Separable.

Proof. Consider an [N, K, d] codebook C with q-ary codewords of length N, q^K codewords, and minimum distance d. The distance between two codewords is defined as the number of positions in which the corresponding symbols differ (Hamming distance). A codeword x ∈ C is transmitted and a vector y ∈ (X ∪ {ξ})^N is received. The number of erased symbols equals the Hamming weight of e, denoted w(e). An error occurs if the decoder decides on a codeword different from x. The probability P{e} of a specific erasure pattern e is independent of the transmitted codeword; it depends only on the channel. Consider a specific erasure vector e of weight m. The decoder decodes the transmitted codeword based on the N - m correctly received symbols. We partition the codebook C into q^{N-m} bins, each bin representing a specific received vector consistent with the erasure pattern e. The number of codewords in the i-th bin is denoted by b_i(e), for i = 1, ..., q^{N-m}. Knowing the erasure vector e and the received vector y, the decoder selects the bin corresponding to y. The set of possible transmitted codewords is exactly the set of codewords in that bin, and all the codewords in the bin are equiprobable to have been transmitted. If b_i(e) = 1, the transmitted codeword x can be decoded with no ambiguity. Otherwise, the optimum decoder randomly selects one of the b_i(e) > 1 codewords in the bin. Thus, the probability of error is 1 - 1/b_i(e) when bin i is selected. Bin i is selected if one of the codewords it contains is transmitted; hence, the probability of selecting bin i is equal to b_i(e)/q^K.
Based on the above arguments, the probability of decoding error for the maximum likelihood decoder of any codebook C equals

P_E(C) = Σ_e P{e} P{error | e}
  (a) = Σ_{m=d}^{N} Σ_{e: w(e)=m} P{e} Σ_{i: b_i(e)>0} (b_i(e)/q^K)(1 - 1/b_i(e))
      = Σ_{m=d}^{N} Σ_{e: w(e)=m} P{e} (1 - b+(e)/q^K)
  (b) ≥ Σ_{m=d}^{N} Σ_{e: w(e)=m} P{e} (1 - q^{N-m}/q^K)          (1)

where b+(e) denotes the number of bins containing one or more codewords. Here, (a) follows from the fact that the transmitted

codeword can be uniquely decoded whenever the number of erasures in the channel is less than the minimum distance of the codebook, and (b) follows from the fact that Σ_{i=1}^{q^{N-m}} b_i(e) = q^K, which gives b+(e) ≤ min(q^{N-m}, q^K). According to (1), P_E(C) is minimized for a codebook C if two conditions are satisfied. First, the minimum distance of C should achieve the maximum possible value, i.e., d = N - K + 1. Second, we should have b+(e) = q^{N-m} for all possible erasure vectors e of any weight m ≥ d. Any MDS code satisfies the first condition by definition. Moreover, it is easy to show that for any MDS code we have b_i(e) = q^{K-N+m}. We first prove this for the case m = N - K. Consider the bins of an MDS code for an arbitrary erasure pattern e with w(e) = N - K. From the facts that d = N - K + 1 (so no two distinct codewords can agree on the K received positions) and Σ_{i=1}^{q^K} b_i(e) = q^K, it is concluded that each bin contains exactly one codeword. Therefore, there exists exactly one codeword matching any K correctly received symbols. Now consider a general erasure pattern e with w(e) = m > N - K. For the i-th bin, concatenating any K - N + m arbitrary symbols with the N - m correctly received symbols results in a distinct codeword of the codebook. Having q^{K-N+m} possibilities to expand the received N - m symbols to K symbols, we obtain b_i(e) = q^{K-N+m}. This completes the proof.

A. MDS Codes with Suboptimal Decoding

In the proof of Proposition I, it is assumed that the received codewords are decoded by maximum likelihood decoding, which is optimum in this case. In many practical cases, however, MDS codes are decoded by simpler decoders [21]. Such suboptimal decoders can perfectly reconstruct the codewords of an [N, K] codebook whenever they receive K or more symbols correctly. In case more than N - K symbols are erased, a decoding error is declared. Let P_E^MDS denote the probability of this event. P_E^MDS is obviously different from the decoding error probability of the maximum likelihood decoder, denoted P_E^MDS,ML: theoretically, an optimum maximum likelihood decoder of an MDS code may still decode the original codeword correctly, with a positive but small probability, even if it receives fewer than K symbols.
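The bin-counting step of the proof can be checked by brute force on a toy MDS codebook. A small sketch (our own construction: a [4, 2] Reed-Solomon-style code over F_5, with an erasure pattern of weight m = 3 > N - K):

```python
from itertools import product

p, N, K = 5, 4, 2   # [N, K] MDS code over F_5 (N <= q + 1)

# Codebook: evaluations of all degree-<K polynomials at the points 0..N-1.
codebook = [tuple(sum(c * x ** i for i, c in enumerate(msg)) % p
                  for x in range(N))
            for msg in product(range(p), repeat=K)]

# Erasure pattern with m = 3 > N - K erasures: only position 0 survives.
kept = [0]
m = N - len(kept)

# Bin the q^K codewords by their values on the surviving positions.
bins = {}
for cw in codebook:
    bins.setdefault(tuple(cw[j] for j in kept), []).append(cw)

# Every bin is non-empty with exactly q^(K - N + m) codewords, so the
# ML decoder, guessing uniformly within a bin, is right w.p. q^(N - m - K).
assert len(bins) == p ** (N - m)
assert all(len(v) == p ** (K - N + m) for v in bins.values())
print(f"{len(bins)} bins, each of size {p ** (K - N + m)}")
```

Here the ML decoder's residual success probability after receiving only one symbol is q^{N-m-K} = 1/5, the small positive probability discussed above.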
More precisely, according to the proof of Proposition I, such a decoder correctly decodes an MDS code over F_q with probability q^{-(K-i)} after receiving i < K correct symbols. Of course, for Galois fields of large cardinality, this probability is usually negligible. The relationship between P_E^MDS and P_E^MDS,ML can be summarized as follows:

P_E^MDS,ML = Σ_{i=0}^{K-1} (1 - q^{-(K-i)}) P{i symbols received correctly},
P_E^MDS    = Σ_{i=0}^{K-1} P{i symbols received correctly}.               (2)

Hence, P_E^MDS,ML is bounded as

(1 - 1/q) P_E^MDS ≤ P_E^MDS,ML ≤ P_E^MDS.                                 (3)

IV. ERROR EXPONENTS OF MDS AND RANDOM CODES

A. Error Exponent of MDS Codes over a Memoryless Erasure Channel

Consider a block code of size [N, K] over the memoryless erasure channel of Fig. 1. Let α = 1 - K/N define the coding overhead. For a q-ary [N, K] code, the rate per symbol, R, equals

R = (K/N) log q = (1 - α) log q.                                          (4)

In a block code of length N, the number of lost symbols is Σ_i e_i, where e is defined as in Proposition I. Thus, the probability of decoding error for the suboptimal decoder of Subsection III-A can be written as

P_E^MDS = P{ Σ_{i=1}^{N} e_i > αN } = Σ_{i=0}^{K-1} P_i                   (5)

where P_i denotes the probability that exactly i packets are received correctly. Since the e_i are i.i.d. random variables with a Bernoulli distribution, we have P_i = C(N, i) (1 - π)^i π^{N-i}. It is easy to see that P_{i+1}/P_i = ((N - i)/(i + 1))((1 - π)/π) > 1 for all i < K if α = 1 - K/N > π. According to equation (4), the condition α > π can be rewritten as R < (1 - π) log q = C, where C is the capacity of the memoryless erasure channel. Therefore, the summation terms in equation (5) are always increasing, and the largest term is the last one. Now, we can bound P_E^MDS as

P_{K-1} ≤ P_E^MDS ≤ K P_{K-1}.

The binomial coefficient C(N, K-1) in P_{K-1} can be bounded using the fact that for any N > K > 0 we have [22]

(1/(N+1)) e^{N H(K/N)} ≤ C(N, K) ≤ e^{N H(K/N)}

where the entropy H(K/N) is computed in nats. Thus, P_E^MDS is bounded as

π(1-α)N e^{-N u(α)} / ((1-π)(N+1)(αN+1)) ≤ P_E^MDS ≤ π(1-α)²N² e^{-N u(α)} / ((1-π)(αN+1))   (6)

where u(α) is defined as

u(α) = 0                                         for α ≤ π,
u(α) = α log(α/π) + (1-α) log((1-α)/(1-π))       for π < α ≤ 1,           (7)

with the log functions computed in the natural (Neperian) base. Using equation (4), the coding error exponent u(·) can be expressed in terms of R instead of α. In (4), K must be an integer, and we must have N ≤ q + 1 for a feasible MDS code. Thus, the finest resolution of rates achievable by a single MDS codebook is R_i = (i/(q+1)) log q for i = 1, 2, ..., q.
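Since (5) is just a binomial tail, the suboptimal decoder's error probability and the exponent bound (6) can be checked numerically. A minimal sketch (exact tail via math.comb; the parameter values are illustrative, not from the paper):

```python
import math

def p_mds(N, K, pi):
    """Eq. (5): probability that fewer than K of the N symbols arrive
    unerased, i.e. more than N - K of the i.i.d. Bernoulli(pi) erasures."""
    return sum(math.comb(N, i) * (1 - pi) ** i * pi ** (N - i)
               for i in range(K))

def u(alpha, pi):
    """Eq. (7): the error exponent u(alpha), a KL divergence, in nats."""
    if alpha <= pi:
        return 0.0
    return (alpha * math.log(alpha / pi)
            + (1 - alpha) * math.log((1 - alpha) / (1 - pi)))

pi = 0.015
N, K = 100, 90                    # coding overhead alpha = 1 - K/N = 0.1 > pi
alpha = 1 - K / N
pe = p_mds(N, K, pi)

# The two sides of (6): polynomial factors times exp(-N u(alpha)).
lo = pi * (1 - alpha) * N * math.exp(-N * u(alpha, pi)) \
     / ((1 - pi) * (N + 1) * (alpha * N + 1))
hi = pi * (1 - alpha) ** 2 * N ** 2 * math.exp(-N * u(alpha, pi)) \
     / ((1 - pi) * (alpha * N + 1))
print(f"{lo:.3e} <= {pe:.3e} <= {hi:.3e}")
assert lo <= pe <= hi
```

As N grows with α fixed, -log(P_E^MDS)/N converges to u(α), the exponent that is compared against the random coding exponent in the next subsection.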
Of course, it is also possible to achieve rates in the intervals (i/(q+1)) log q < R < ((i+1)/(q+1)) log q by time sharing between two codebooks of sizes [q+1, i] and [q+1, i+1]. However, in such cases, the smaller error exponent, belonging

to the codebook of size [q+1, i+1], dominates. Therefore, u(R) has a stepwise shape of the form

u(R) = 0                                              for 1 - π ≤ r̄,
u(R) = r̄ log(r̄/(1-π)) + (1-r̄) log((1-r̄)/π)           for 0 < r̄ < 1 - π,   (8)

where r̄ is defined as

r̄ = (1/(q+1)) ⌈(q+1) R / log q⌉.                                           (9)

B. Random Coding Error Exponent of a Memoryless Erasure Channel

It is interesting to compare the MDS error exponent in (8) with the random coding error exponent described in [6]. This exponent, E_r(R), can be written as

E_r(R) = max_{0≤ρ≤1} { -ρR + max_Q E_0(ρ, Q) }                            (10)

where Q is the input distribution, and E_0(ρ, Q) equals

E_0(ρ, Q) = -log Σ_j [ Σ_k Q(k) P(j|k)^{1/(1+ρ)} ]^{1+ρ}.                 (11)

Due to the symmetry of the channel transition probabilities, the uniform distribution maximizes (10) over all possible input distributions. Therefore, E_0(ρ, Q) can be simplified to

E_0(ρ) = -log( (1-π) q^{-ρ} + π ).                                        (12)

Solving the maximization in (10) gives E_r(R) as

E_r(R) = -log( (1-π)/q + π ) - R                      for 0 ≤ R ≤ R_c,
E_r(R) = r log(r/(1-π)) + (1-r) log((1-r)/π)          for R_c ≤ R ≤ C,    (13)

where r = R / log q and R_c = ((1-π)/(1-π+πq)) log q are the normalized rate and the critical rate, respectively. Comparing (8) and (13), we observe that MDS codes and random codes perform exponentially the same for rates between the critical rate and the capacity. However, in the region below the critical rate, where the error exponent of the random code decays linearly with R, MDS codes achieve a larger error exponent. It is worth noting that this interval is negligible for large alphabet sizes. Moreover, the stepwise graph of u(R) meets its envelope, as the steps are very small for large values of q.

Fig. 2. Error exponents of random coding, E_r(R), and MDS coding, u(R), for a memoryless erasure channel with π = 0.015 and (a) q = 128, (b) q = 1024.

Figure 2 depicts the error exponents of random codes and MDS codes for the alphabet sizes of 128 and 1024 over an erasure channel with π = 0.015. As observed in Fig. 2(a), u(R) can be approximated by its envelope very closely even for a relatively small alphabet size (q = 128). For a larger alphabet size (Fig. 2(b)), the graph of u(R) almost coincides with its envelope, which equals E_r(R) in the region above the critical rate. Moreover, as observed in Fig.
2(b), the region where MDS codes outperform random codes becomes very small even for moderate values of the alphabet size (q = 1024).

C. Exponential Optimality of Random Coding

Using the sphere packing bound, it has been shown that random coding is exponentially optimal for rates above the critical rate over channels with relatively small alphabet sizes [7]. However, the sphere packing bound is not tight for channels whose alphabet size q is comparable to the block length N. Here, based on Proposition I and the results of Section IV, we prove the exponential optimality of random coding for erasure channels satisfying q + 1 ≥ N. The decoding error probability of a random codebook under maximum-likelihood decoding can be upper-bounded as

P_E^rand (a)≤ e^{-N E_r(R)} (b)≤ e^{-N u(R)}                              (14)

where (a) follows from [6], and (b) is valid only for rates above the critical rate, according to (8) and (13). We can also lower-bound P_E^rand as

P_E^rand (a)≥ P_E^MDS,ML (b)≥ (1 - 1/q) P_E^MDS (c)≥ (1 - 1/q) π r̄ N e^{-N u(R)} / ((1-π)(N+1)((1-r̄)N+1))   (15)

where (a) follows from Proposition I, (b) from inequality (3), and (c) from inequality (6). Combining (14) and (15) guarantees that both the upper bound and the lower bound on P_E^rand are exponentially tight, and that the decaying exponent of P_E^rand versus N is indeed u(R). Moreover, we can write

P_E^MDS,ML (a)≤ P_E^rand (b)≤ [ (1-π)(N+1)((1-r̄)N+1) / ((1 - 1/q) π r̄ N) ] P_E^MDS,ML   (16)

where (a) follows from Proposition I, and (b) results from inequalities (14) and (15). Since the coefficient of P_E^MDS,ML in inequality (16) contains no exponential term, it can be concluded that, for rates above the critical rate, random codes perform exponentially the same as MDS codes, which were already shown to be optimum.

V. CONCLUSION

The performance of random codes and MDS codes over an erasure channel with a fixed but large alphabet size has been analyzed. It was shown that MDS codes minimize the probability of decoding error under maximum-likelihood decoding over any erasure channel, with or without memory. Then, the decoding error probabilities of MDS and random codes were bounded by exponential terms, and the corresponding exponents were compared. It was observed that the error exponents are identical over a wide range of rates. Knowing that MDS codes are optimum, it is concluded that random coding is exponentially optimal over a memoryless erasure channel as long as the code block length N does not exceed the alphabet size of the channel by more than one.

REFERENCES

[8] G. D. Forney, "Exponential Error Bounds for Erasure, List, and Decision Feedback Schemes," IEEE Transactions on Information Theory, vol. 14, no. 2, pp. 206-220, 1968.
[9] R. M. Roth, Introduction to Coding Theory, 1st ed. Cambridge University Press, 2006, pp. 333-351.
[10] X. H. Peng, "Erasure-control Coding for Distributed Networks," IEE Proceedings on Communications, vol. 152, pp. 1075-1080, 2005.
[11] N. Alon, J. Edmonds, and M. Luby, "Linear Time Erasure Codes with Nearly Optimal Recovery," in IEEE Symposium on Foundations of Computer Science, Proc. IEEE Vol.
3, 1995, pp. 512-519.
[12] J. Justesen, "On the Complexity of Decoding Reed-Solomon Codes," IEEE Transactions on Information Theory, vol. 22, no. 2, pp. 237-238, 1976.
[13] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Efficient Erasure Correcting Codes," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 569-584, 2001.
[14] M. G. Luby, "LT Codes," in IEEE Symposium on the Foundations of Computer Science (FOCS), 2002, pp. 271-280.
[15] A. Shokrollahi, "Raptor Codes," IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2551-2567, 2006.
[16] J. L. Walker, "A New Approach to the Main Conjecture on Algebraic-geometric MDS Codes," Designs, Codes and Cryptography, vol. 9, no. 1, pp. 115-120, 1996.
[17] S. Fashandi, S. Oveis Gharan, and A. K. Khandani, "Path Diversity in Packet Switched Networks: Performance Analysis and Rate Allocation," in IEEE Global Telecommunications Conference (GLOBECOM '07), 2007, pp. 1840-1844.
[18] R. Koetter and M. Medard, "An Algebraic Approach to Network Coding," IEEE/ACM Transactions on Networking, vol. 11, no. 5, pp. 782-795, 2003.
[19] P. A. Chou, Y. Wu, and K. Jain, "Practical Network Coding," in Allerton Conference on Communication, Control and Computing, 2003.
[20] C. Gkantsidis and P. R. Rodriguez, "Network Coding for Large Scale Content Distribution," in IEEE INFOCOM, Proc. IEEE Vol. 4, 2005, pp. 2235-2245.
[21] R. M. Roth, Introduction to Coding Theory, 1st ed. Cambridge University Press, 2006, pp. 183-204.
[22] T. Cover and J. Thomas, Elements of Information Theory, 1st ed. New York: Wiley, 2006, pp. 284-285.
[1] W. T. Tan and A. Zakhor, "Video Multicast Using Layered FEC and Scalable Compression," IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 3, pp. 373-386, 2001.
[2] L. Dairaine, L. Lancerica, J. Lacan, and J. Fimes, "Content-Access QoS in Peer-to-Peer Networks Using a Fast MDS Erasure Code," Elsevier Computer Communications, vol. 28, no. 15, pp. 1778-1790, 2005.
[3] C. E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, vol.
27, pp. 379-423, 623-656, 1948.
[4] P. Elias, "Coding for Noisy Channels," IRE Convention Record, vol. 4, pp. 37-46, 1955.
[5] C. E. Shannon, R. G. Gallager, and E. R. Berlekamp, "Lower Bounds to Error Probability for Coding on Discrete Memoryless Channels," Information and Control, vol. 10, pp. 65-103, 522-552, 1967.
[6] R. G. Gallager, Information Theory and Reliable Communication, 1st ed. New York, NY, USA: John Wiley & Sons, 1968, pp. 135-144.
[7] R. G. Gallager, Information Theory and Reliable Communication, 1st ed. New York, NY, USA: John Wiley & Sons, 1968, pp. 157-158.