Concatenated codes can achieve list-decoding capacity


Electronic Colloquium on Computational Complexity, Report No. 54 (2008)

Concatenated codes can achieve list-decoding capacity

Venkatesan Guruswami
Department of Computer Science and Engineering, University of Washington, Seattle, WA

Atri Rudra
Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY

Abstract

We prove that binary linear concatenated codes with an outer algebraic code (specifically, a folded Reed-Solomon code) and independently and randomly chosen linear inner codes achieve the list-decoding capacity with high probability. In particular, for any 0 < ρ < 1/2 and ε > 0, there exist concatenated codes of rate at least 1 - H(ρ) - ε that are (combinatorially) list-decodable up to a ρ fraction of errors. (The best possible rate, aka list-decoding capacity, for such codes is 1 - H(ρ), and is achieved by random codes.) A similar result, with better list size guarantees, holds when the outer code is also randomly chosen. Our methods and results extend to the case when the alphabet size is any fixed prime power q ≥ 2.

Our result shows that despite the structural restriction imposed by code concatenation, the family of concatenated codes is rich enough to include capacity achieving list-decodable codes. This provides some encouraging news for tackling the problem of constructing explicit binary list-decodable codes with optimal rate, since code concatenation has been the preeminent method for constructing good codes over small alphabets.

A preliminary version of this paper appears in the Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA08). Research supported by Sloan and Packard Fellowships, and NSF Career Award CCF. This work was done while the author was at the University of Washington and supported by NSF CCF.

1 Introduction

Concatenated Codes. Ever since its discovery and initial use by Forney [3], code concatenation has been a powerful tool for constructing error-correcting codes. At its core, the idea of code concatenation is really simple and natural. A concatenated code over a small alphabet, say a binary code for definiteness, is constructed in two steps. In the first step, the message is encoded via an error-correcting code C_1 over a large alphabet, say a large finite field F_{2^m}. C_1 is referred to as the outer code. Each of the symbols of the resulting codeword of C_1 is then encoded via a binary code C_2 that has 2^m codewords (corresponding to the 2^m outer codeword symbols). The code C_2 is referred to as the inner code.

The popularity of code concatenation arises due to the fact that it is often difficult to give a direct construction of good (long) binary codes. On the other hand, over large alphabets, an array of powerful algebraic constructions (such as Reed-Solomon and algebraic-geometric codes) with excellent parameters are available. While the concatenated construction still requires an inner code that is binary, this is a small/short code with block length O(m), which is typically logarithmic or smaller in the length of the outer code. A good choice for the inner code can therefore be found efficiently by a brute-force search, leading to a polynomial time construction of the final concatenated code.

This paradigm draws its power from the fact that a concatenated code, roughly speaking, inherits the good features of both the outer and inner codes. For example, the rate of the concatenated code is the product of the rates of the outer and inner codes, and the minimum distance is at least the product of the distances of the outer and inner codes. The alphabet of the concatenated code equals that of the inner code.

Above, we assumed that all the inner codes were identical. This is not necessary and one can use different inner codes for encoding different symbols of the outer codeword.
One way to leverage this is to use an explicit ensemble of inner codes most of which are good. This was the idea behind Justesen's celebrated explicit construction of asymptotically good binary codes [11]. In this work, we will use random, i.i.d. choices for the different inner codes, and the independence of the inner encodings will be crucial in our analysis.

By concatenating an outer Reed-Solomon code of high rate with short inner codes achieving Shannon capacity (known to exist by a random coding argument), Forney [3] gave a construction of binary linear codes that achieve the capacity of the binary symmetric channel with a polynomial time decoding complexity. By using as outer code a linear time encodable/decodable code, one can make the encoding/decoding complexity linear in the block length [14]. In comparison, Shannon's nonconstructive proof of his capacity theorem used an exponential time maximum likelihood decoder.

The List Decoding Context. Our focus in this work is on the worst-case error model, with the goal being to recover from an arbitrary fraction ρ of errors with the best possible rate. In this setting, notions such as minimum distance and list decoding become central, and concatenated codes have been the key tool in many developments concerning these notions. (All the basic coding theory notions are formally defined in Section 2.) In fact, for the longest time, till the work on expander codes by Sipser and Spielman [13], code concatenation schemes gave the only known explicit construction of a family of asymptotically good codes (i.e., with rate and relative distance both bounded away from zero as the block length grew). Even today, the best trade-offs between rate and distance for explicit codes are achieved by variants of concatenated codes; see [2] for further details.

Let us consider the problem of constructing a family of binary codes for correcting a ρ fraction of worst-case errors, for some 0 < ρ < 1/2. For large n, there are about 2^{H(ρ)n} binary strings of weight ρn, where H(ρ) = -ρ·log_2(ρ) - (1-ρ)·log_2(1-ρ) is the binary entropy function. Therefore, when up to a ρ fraction of symbols can be corrupted, a transmitted codeword c can get distorted into any one of about 2^{H(ρ)n} possible received words. Since the decoder must be able to associate c with all such received words, it is easy to argue that there can be at most about 2^{(1-H(ρ))n} codewords. In other words, the rate of the code must be at most 1 - H(ρ).

Perhaps surprisingly, the above simplistic upper bound on rate is in fact accurate, and at least non-constructively, a rate arbitrarily close to 1 - H(ρ) can be realized. In fact, with high probability a completely random code of rate (1 - H(ρ) - ε), obtained by picking 2^{(1-H(ρ)-ε)n} codewords randomly and independently, has the property that every Hamming ball of radius ρn has at most O(1/ε) codewords. One can thus use such a code to (list) decode from a ρ fraction of errors, where in the worst-case the decoder may output a list of O(1/ε) answers. The trade-off R = 1 - H(ρ) between the rate R and fraction of errors ρ is called the list-decoding capacity. The choice of binary alphabet in this discussion is only for definiteness. Over an alphabet of size q ≥ 2, the list-decoding capacity equals 1 - H_q(ρ), where H_q(·) is the q-ary entropy function.

Unfortunately, the above is a nonconstructive argument and the codes achieving list-decoding capacity are shown to exist by a random coding argument, and are not even succinctly, let alone explicitly, specified. It can also be shown that random linear codes achieve list-decoding capacity, though the known proofs only achieve a list size of 2^{O(1/ε)} when the rate is within ε of the list-decoding capacity.¹ The advantage with linear codes is that being subspaces they can be described succinctly by a basis for the subspace (called a generator matrix in coding parlance).
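To make the capacity trade-off above concrete, here is a small numerical sketch (the helper names are our own, not from the paper) evaluating the binary entropy function and the rate bound 1 - H(ρ):

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def list_decoding_capacity(rho: float) -> float:
    """Best possible rate, 1 - H(rho), for list decoding a rho fraction of errors."""
    return 1.0 - binary_entropy(rho)

# A rho of about 0.11 already pushes the achievable rate down to about 1/2.
print(round(list_decoding_capacity(0.11), 3))
```

For instance, correcting an 0.11 fraction of errors caps the rate near 1/2, while ρ approaching 1/2 drives the achievable rate to zero.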
Yet, a generic linear code offers little in terms of algorithmically useful structure, and in general only brute-force decoders running in exponential time are known for such a code. Turning to constructive results for list decoding, recently explicit codes approaching list-decoding capacity together with polynomial time list-decoding algorithms were constructed over large alphabets [9]. Using these as outer codes in a concatenation scheme led to polynomial time constructions of binary codes that achieved a rate vs. list-decoding radius trade-off called the Zyablov bound [9]. By using a multilevel generalization of code concatenation, the trade-off was recently improved to the so-called Blokh-Zyablov bound [8]. Still, these explicit constructions fall well short of achieving the list-decoding capacity for binary (and other small alphabet) codes, which remains a challenging open problem.

Given the almost exclusive stronghold of concatenated codes on progress in explicit constructions of list-decodable codes over small alphabets, the following natural question arises: Do there exist concatenated codes that achieve list-decoding capacity, or does the stringent structural restriction imposed on the code by concatenation preclude achieving list-decoding capacity?

The natural way to analyze the list-decoding performance of concatenated codes suggests that perhaps concatenation is too strong a structural bottleneck to yield optimal list-decodable codes. Such an analysis proceeds by decoding the blocks of the received word corresponding to various inner encodings, which results in a small set S_i of possible symbols for each position i of the outer code. One then argues that there cannot be too many outer codewords whose i-th symbol

¹ For the case of binary alphabet alone, it is shown in [5] that a list size of O(1/ε) suffices. But this result is not known to hold with high probability.

belongs to S_i for many positions i (this is called a list recovery bound).² Even assuming optimal, capacity-achieving bounds on the individual list-decodability of the outer and inner codes, the above two-stage analysis bottlenecks at the Zyablov bound.³

The weakness of the two-stage analysis is that it treats the different inner decodings independently, and fails to exploit the fact that the various inner blocks encode a structured set of symbols, namely those arising in a codeword of the outer code. Exploiting this and arguing that the structure of the outer codewords prevents many bad inner blocks from occurring simultaneously, and using this to get improved bounds, however, seems like an intricate task. In part this is because the current understanding of bad list-decoding configurations, i.e., Hamming balls of small radius containing many codewords, for codes is rather poor.

Our Results. In this work, we prove that there exist binary (and q-ary for any fixed prime power q) linear concatenated codes that achieve list-decoding capacity for any desired rate. In fact, we prove that a random concatenated code drawn from a certain ensemble achieves capacity with overwhelming probability. This is encouraging news for the eventual goal of achieving list-decoding capacity (or at least, going beyond the above-mentioned Blokh-Zyablov bottleneck) over small alphabets with polynomial time decodable codes. The outer codes in our construction are the folded Reed-Solomon codes which were shown in [9] to have near-optimal list-recoverability properties.⁴ The inner codes for the various positions are random linear codes (which can even have a rate of 1), with a completely independent random choice for each outer codeword position. To get within ε of list-decoding capacity, our result guarantees an output list size bound that is a large polynomial (greater than N^{1/ε}) in the block length N.
We also prove that one can achieve capacity when a random linear code is chosen for the outer code; we get a better list size upper bound of a constant depending only on ε in this case. A corollary of our result is that one can construct binary codes achieving list-decoding capacity with a number of random bits that grows quasi-linearly in the block length, compared to the quadratic bound (achieved by a random linear code) known earlier.

Our results are inspired by results of Blokh and Zyablov [1] and Thommesen [15] showing the existence of binary concatenated codes whose rate vs. distance trade-off meets the Gilbert-Varshamov (GV) bound. We recall that the GV bound is the best known trade-off between rate and relative distance for binary (and q-ary for q < 49) codes and is achieved w.h.p. by random linear codes. Blokh and Zyablov show the result for independent random choices for the outer code and the various inner encodings. Thommesen establishes that one can fix the outer code to be a Reed-Solomon code and only pick the inner codes randomly (and independently).

Organization of the paper. Section 2 establishes the necessary background needed for the subsequent sections. We give a high level overview of our proof and how it compares with Thommesen's

² When the outer code is algebraic such as Reed-Solomon or folded Reed-Solomon, the list recovery step admits an efficient algorithm which leads to a polynomial time list-decoding algorithm for the concatenated code, such as in [9, 8].
³ One can squeeze out a little more out of the argument and achieve the Blokh-Zyablov bound, by exploiting the fact that sub-codes of the inner codes, being of lower rate, can be list decoded to a larger radius [8].
⁴ We note that the excellent list-recoverability of folded Reed-Solomon codes is crucial for our argument, and we do not know how to prove a similar result using just Reed-Solomon codes as outer codes.

proof in Section 3. We present our results for concatenated codes with folded Reed-Solomon and random linear codes as outer codes in Sections 4 and 5 respectively. We conclude with some open questions in Section 6.

2 Preliminaries

For an integer m ≥ 1, we will use [m] to denote the set {1,...,m}.

2.1 q-ary Entropy and Related Functions

Let q ≥ 2 be an integer. H_q(x) = x·log_q(q-1) - x·log_q(x) - (1-x)·log_q(1-x) will denote the q-ary entropy function. We will make use of the following property of this function.

Lemma 1 ([12]). For every 0 ≤ y ≤ 1 - 1/q and for every small enough ε > 0, we have H_q^{-1}(y - ε²/c_q) ≥ H_q^{-1}(y) - ε, where c_q ≥ 1 is a constant that depends only on q.

For 0 ≤ z ≤ 1 define

    α_q(z) = 1 - H_q(1 - q^{z-1}).   (1)

We will need the following property of the function above.

Lemma 2. Let q ≥ 2 be an integer. For every 0 ≤ z ≤ 1, α_q(z) ≤ z.

Proof. The proof follows from the subsequent sequence of relations:

    α_q(z) = 1 - H_q(1 - q^{z-1})
           = 1 - (1 - q^{z-1})·log_q(q-1) + (1 - q^{z-1})·log_q(1 - q^{z-1}) + q^{z-1}·(z-1)
           = z·q^{z-1} + (1 - q^{z-1})·(1 - log_q((q-1)/(1 - q^{z-1})))
           ≤ z,

where the last inequality follows from the facts that q^{z-1} ≤ 1 and 1 - q^{z-1} ≤ 1 - 1/q, which implies that log_q((q-1)/(1 - q^{z-1})) ≥ 1.

We will also consider the following function:

    f_{x,q}(θ) = (1-θ)^{-1}·H_q^{-1}(1-θx),   where 0 ≤ θ, x ≤ 1.

We will need the following property of this function, which was proven in [15] for the q = 2 case. The following is an easy extension of the result for general q. For the sake of completeness, we provide a proof in Appendix A.

Lemma 3 ([15]). Let q ≥ 2 be an integer. For any x > 0 and 0 ≤ y ≤ α_q(x)/x,

    min_{0 ≤ θ ≤ y} f_{x,q}(θ) = (1-y)^{-1}·H_q^{-1}(1-xy).
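The functions H_q and α_q are easy to evaluate numerically. The following sketch (the function names are ours) spot-checks Lemma 2, i.e., α_q(z) ≤ z, on a grid of values:

```python
import math

def h_q(x: float, q: int) -> float:
    """q-ary entropy H_q(x) = x*log_q(q-1) - x*log_q(x) - (1-x)*log_q(1-x)."""
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return math.log(q - 1, q)
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def alpha_q(z: float, q: int) -> float:
    """alpha_q(z) = 1 - H_q(1 - q^(z-1)), as in equation (1)."""
    return 1.0 - h_q(1.0 - q ** (z - 1.0), q)

# Spot-check Lemma 2 (alpha_q(z) <= z) for several q and 0 <= z <= 1.
for q in (2, 3, 5, 8):
    for i in range(101):
        z = i / 100
        assert alpha_q(z, q) <= z + 1e-12
```

Note that α_q(0) = 1 - H_q(1 - 1/q) = 0 and α_q(1) = 1, so the inequality is tight at both endpoints.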

2.2 Basic Coding Definitions

A code C of dimension k and block length n over an alphabet Σ is a subset of Σ^n of size |Σ|^k. The rate of such a code equals k/n. Each vector in C is called a codeword. In this paper, we will focus on the case when Σ is a finite field. We will denote by F_q the field with q elements. A code C over F_q is called a linear code if C is a subspace of F_q^n. In this case the dimension of the code coincides with the dimension of C as a vector space over F_q. By abuse of notation we will also think of a code C as a map from elements in F_q^k to their corresponding codewords in F_q^n. If C is linear, this map is a linear transformation, mapping a row vector x ∈ F_q^k to a vector xG ∈ F_q^n for a k × n matrix G over F_q called the generator matrix.

The Hamming distance between two vectors in Σ^n is the number of places they differ in. The (minimum) distance of a code C is the minimum Hamming distance between any pair of distinct codewords from C. The relative distance is the ratio of the distance to the block length.

2.3 Code Concatenation

Concatenated codes are constructed from two different kinds of codes that are defined over alphabets of different sizes. Say we are interested in a code over F_q (in this paper, we will always think of q ≥ 2 as being a fixed constant). Then the outer code C_out is defined over F_Q, where Q = q^k for some positive integer k, and has block length N. The second type of code, called the inner codes, which are denoted by C_in^1,...,C_in^N, are defined over F_q and are each of dimension k (note that the message space of C_in^i for all i and the alphabet of C_out have the same size). The concatenated code, denoted by C = C_out ∘ (C_in^1,...,C_in^N), is defined as follows. Let the rate of C_out be R and let the block lengths of C_in^i be n (for 1 ≤ i ≤ N). Define K = RN and r = k/n. The input to C is a vector m = ⟨m_1,...,m_K⟩ ∈ (F_q^k)^K. Let C_out(m) = ⟨x_1,...,x_N⟩. The codeword in C corresponding to m is defined as follows:

    C(m) = ⟨C_in^1(x_1), C_in^2(x_2),...,C_in^N(x_N)⟩.
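As an illustration of the definition, here is a minimal sketch of the concatenation step over F_2. The toy parameters and the stand-in outer codeword are our own; in the paper the outer codeword would come from a folded Reed-Solomon or random linear code over F_{2^k}:

```python
import random

k, n, N = 4, 8, 6  # inner dimension, inner block length, outer block length

def random_generator_matrix(k: int, n: int):
    """A random k x n matrix over F_2, one per outer position."""
    return [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]

def inner_encode(x, G):
    """Row-vector/matrix product x*G over F_2."""
    n_cols = len(G[0])
    return [sum(x[i] * G[i][j] for i in range(len(x))) % 2 for j in range(n_cols)]

# Independent random inner codes C_in^1, ..., C_in^N.
Gs = [random_generator_matrix(k, n) for _ in range(N)]

# Stand-in outer codeword (x_1, ..., x_N); each symbol of F_{2^k} is viewed
# as a k-bit vector over F_2 via the standard isomorphism.
outer_codeword = [[random.randint(0, 1) for _ in range(k)] for _ in range(N)]

# C(m) = <C_in^1(x_1), ..., C_in^N(x_N)>: block length n*N, rate (k/n)*R.
codeword = [bit for x, G in zip(outer_codeword, Gs) for bit in inner_encode(x, G)]
assert len(codeword) == n * N
```

The final block length is n·N and the rate is the product r·R of the inner and outer rates, as stated above.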
The outer code C_out will either be a random linear code over F_Q or the folded Reed-Solomon code from [9]. In the case when C_out is random, we will pick C_out by selecting K = RN vectors uniformly at random from F_Q^N to form the rows of the generator matrix. For every position 1 ≤ i ≤ N, we will choose an inner code C_in^i to be a random linear code over F_q of block length n and rate r = k/n. In particular, we will work with the corresponding generator matrices G_i, where every G_i is a random k × n matrix over F_q. All the generator matrices G_i (as well as the generator matrix for C_out, when we choose a random C_out) are chosen independently. This fact will be used crucially in our proofs.

Given the outer code C_out and the inner codes C_in^i, recall that for every codeword u = (u_1,...,u_N) ∈ C_out, the codeword

    uG := (u_1·G_1, u_2·G_2,...,u_N·G_N)

is in C = C_out ∘ (C_in^1,...,C_in^N), where the operations are over F_q.

We will need the following notions of the weight of a vector. Given a vector v ∈ F_q^{nN}, its Hamming weight is denoted by wt(v). Given a vector y = (y_1,...,y_N) ∈ (F_q^n)^N and a subset S ⊆ [N], we will use wt_S(y) to denote the Hamming weight over F_q of the subvector (y_i)_{i∈S}. Note that wt(y) = wt_{[N]}(y).

We will need the following simple lemma due to Thommesen, which is stated in a slightly different form in [15]. For the sake of completeness we also present its proof.

Lemma 4 ([15]). Given a fixed outer code C_out of block length N and an ensemble of random inner linear codes of block length n given by generator matrices G_1,...,G_N, the following is true. Let y ∈ F_q^{nN}. For any codeword u ∈ C_out, any non-empty subset S ⊆ [N] such that u_i ≠ 0 for all i ∈ S, and any integer h ≤ n|S|(1 - 1/q):

    Pr[wt_S(uG - y) ≤ h] ≤ q^{-n|S|·(1 - H_q(h/(n|S|)))},

where the probability is taken over the random choices of G_1,...,G_N.

Proof. Let |S| = s and w.l.o.g. assume that S = [s]. As the choices for G_1,...,G_N are made independently, it is enough to show that the claimed probability holds for the random choices for G_1,...,G_s. For any 1 ≤ i ≤ s and any y_i' ∈ F_q^n, since u_i ≠ 0, we have Pr_{G_i}[u_i·G_i = y_i'] = q^{-n}. Further, these probabilities are independent for every i. Thus, for any y' = (y_1',...,y_s') ∈ (F_q^n)^s, Pr_{G_1,...,G_s}[u_i·G_i = y_i' for every 1 ≤ i ≤ s] = q^{-ns}. This implies that:

    Pr_{G_1,...,G_s}[wt_S(uG - y) ≤ h] = q^{-ns}·Σ_{j=0}^{h} (ns choose j)·(q-1)^j.

The claimed result follows from the following well known inequality for h/(ns) ≤ 1 - 1/q ([10]):

    Σ_{j=0}^{h} (ns choose j)·(q-1)^j ≤ q^{ns·H_q(h/(ns))}.

2.4 List Decoding and List Recovery

Definition 1 (List decodable code). For 0 < ρ < 1 and an integer L ≥ 1, a code C ⊆ F_q^n is said to be (ρ, L)-list decodable if for every y ∈ F_q^n, the number of codewords in C that are within Hamming distance ρn from y is at most L.

We will also crucially use a generalization of list decoding called list recovery, a term first coined in [6] even though the notion had existed before. List recovery has been extremely useful in list-decoding concatenated codes. The input for list recovery is not a sequence of symbols but rather a sequence of subsets of allowed codeword symbols, one for each codeword position.

Definition 2 (List recoverable code). A code C ⊆ F_q^n is called (ρ, l, L)-list recoverable if for every sequence of sets S_1, S_2,...,S_n, where S_i ⊆ F_q and |S_i| ≤ l for every 1 ≤ i ≤ n, there are at most L codewords (c_1,...,c_n) ∈ C such that c_i ∈ S_i for at least (1-ρ)n positions i.
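The binomial-sum inequality invoked at the end of the proof of Lemma 4 above can be sanity-checked numerically. This sketch (ours, not from the paper) verifies the q-ary Hamming ball volume bound for small parameters:

```python
import math

def h_q(x: float, q: int) -> float:
    """q-ary entropy function, for 0 < x <= 1 - 1/q (and H_q(0) = 0)."""
    if x == 0.0:
        return 0.0
    return (x * math.log(q - 1, q) - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def ball_volume(m: int, h: int, q: int) -> int:
    """Number of vectors of F_q^m within Hamming distance h of a fixed point."""
    return sum(math.comb(m, j) * (q - 1) ** j for j in range(h + 1))

# Verify sum_{j<=h} C(m,j)(q-1)^j <= q^(m*H_q(h/m)) whenever h/m <= 1 - 1/q.
for q in (2, 3, 4):
    m = 60  # plays the role of n*|S| in Lemma 4
    for h in range(1, int(m * (1 - 1 / q)) + 1):
        assert ball_volume(m, h, q) <= q ** (m * h_q(h / m, q)) * (1 + 1e-9)
```

The small multiplicative slack only guards against floating-point rounding; the inequality itself is exact in the stated range.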
The classical family of Reed-Solomon (RS) codes over a field F are defined to be the evaluations of low-degree polynomials at a sequence of distinct points of F. Folded Reed-Solomon codes are obtained by viewing the RS code as a code over a larger alphabet F^s by bundling together consecutive s symbols for some folding parameter s. We will not need any specifics of folded RS codes (in fact even its definition) beyond (i) the strong list recovery property guaranteed by the following

theorem from [9], and (ii) the fact that specifying any K + 1 positions in a dimension K folded Reed-Solomon code suffices to identify the codeword (equivalently, a dimension K and length N folded RS code has distance at least N - K).

Theorem 1 ([9]). For every integer l ≥ 1, for all constants ε > 0, for all 0 < R < 1, and for every prime p, there is an explicit family of folded Reed-Solomon codes over fields of characteristic p that have rate at least R and which can be (1 - R - ε, l, L(N))-list recovered in polynomial time, where for codes of block length N, L(N) = (N/ε²)^{O(ε^{-1}·log(l/R))} and the code is defined over alphabet of size (N/ε²)^{O(ε^{-2}·log(l)/(1-R))}.

3 Overview of the Proof

Our proofs are inspired by Thommesen's proof [15] of the following result concerning the rate vs. distance trade-off of concatenated codes: Binary linear concatenated codes with an outer Reed-Solomon code and independently and randomly chosen inner codes meet the Gilbert-Varshamov bound with high probability,⁵ provided a moderate condition on the outer and inner rates is met. Given that our proof builds on the proof of Thommesen, we start out by reviewing the main ideas in his proof.

The outer code C_out in [15] is a Reed-Solomon code of length N and rate R (over F_Q) where Q = q^k for some integer k ≥ 1. The inner linear codes (over F_q) are generated by N randomly chosen k × n generator matrices G = (G_1,...,G_N), where r = k/n. Note that since the final code will be linear, to show that with high probability the concatenated code will have distance close to H_q^{-1}(1 - rR), it is enough to show that the probability of the Hamming weight of uG over F_q being at most (H_q^{-1}(1 - rR) - ε)nN (for every non-zero Reed-Solomon codeword u = (u_1,...,u_N) and ε > 0) is small. Fix a codeword u ∈ C_out. Now note that if for some 1 ≤ i ≤ N, u_i = 0, then for every choice of G_i, u_i·G_i = 0. Thus, only the non-zero symbols of u contribute to wt(uG). Further, for a non-zero u_i, u_i·G_i takes all the values in F_q^n with equal probability over the random choices of G_i.
Since the choices of the G_i's are independent, this implies that uG takes each of the possible q^{n·wt(u)} values on its support with the same probability. Thus, the total probability that uG has a Hamming weight of at most h is

    q^{-n·wt(u)}·Σ_{w=0}^{h} (n·wt(u) choose w)·(q-1)^w ≤ q^{-n·wt(u)·(1 - H_q(h/(n·wt(u))))}

(this is Lemma 4 for the case S = [N]). The rest of the argument follows by doing a careful union bound of this probability for all non-zero codewords in C_out, using the weight distribution of the RS code. This step imposes an upper bound on the outer rate R (specifically, R ≤ α_q(r)/r), but still offers enough flexibility to achieve any desired value in (0, 1) for the overall rate rR (even with the choice r = 1, i.e., when the inner encodings don't add any redundancy).

Let us now try to extend the idea above to show a similar result for list decoding. We want to show that any Hamming ball of radius at most h = (H_q^{-1}(1 - rR) - ε)nN has at most L codewords from the concatenated code C (assuming we want to show that L is the worst case list size). To show this let us look at a set of L + 1 codewords from C and try to prove that the

⁵ A q-ary code of rate R meets the Gilbert-Varshamov bound if it has relative distance at least H_q^{-1}(1 - R).
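The probability bound above can be checked by simulation for q = 2: since each u_i·G_i with u_i ≠ 0 is uniform over F_2^n, wt(uG) restricted to the support of u is a Binomial(n·wt(u), 1/2) random variable. A small Monte-Carlo sketch (the parameters are our own toy choices):

```python
import math
import random

def H(p: float) -> float:
    """Binary entropy function."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, w = 10, 5        # inner block length, number of non-zero outer symbols
m = n * w           # positions that actually contribute to wt(uG)
h = int(0.3 * m)    # weight threshold

# Each contributing bit of uG is an independent fair coin, so wt(uG) on the
# support is Binomial(m, 1/2); compare its lower tail to the entropy bound
# Pr[wt(uG) <= h] <= 2^(-m*(1 - H(h/m))).
trials = 20000
hits = sum(1 for _ in range(trials)
           if sum(random.getrandbits(1) for _ in range(m)) <= h)
empirical = hits / trials
bound = 2 ** (-m * (1 - H(h / m)))
assert empirical <= bound + 0.01   # the bound dominates, up to sampling noise
```

For these parameters the entropy bound is about 0.016 while the true tail probability is a few times smaller, consistent with the bound being an over-estimate.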

probability that all of them lie within some fixed ball B of radius h is small. Let u^1,...,u^{L+1} be the corresponding codewords in C_out. Extending Thommesen's proof would be straightforward if the events corresponding to u^j·G belonging to the ball B for various 1 ≤ j ≤ L + 1 were independent. In particular, if we can show that for every position 1 ≤ i ≤ N, all the non-zero symbols in {u_i^1, u_i^2,...,u_i^{L+1}} are linearly independent over F_q then the generalization of Thommesen's proof is immediate.

Unfortunately, the notion of independence discussed above does not hold for every (L+1)-tuple of codewords from C_out. The natural way to get independence when dealing with linear codes is to look at messages that are linearly independent. It turns out that if C_out is also a random linear code over F_Q then we have a good approximation of the notion of independence above. Specifically, we show that with very high probability, for a linearly independent (over F_Q) set of messages⁶ m^1,...,m^{L+1}, the set of codewords u^1 = C_out(m^1),...,u^{L+1} = C_out(m^{L+1}) have the following approximate independence property: for many positions 1 ≤ i ≤ N, many non-zero symbols in {u_i^1,...,u_i^{L+1}} are linearly independent over F_q. It turns out that this approximate notion of independence is enough for Thommesen's proof to go through. We remark that the notion above crucially uses the fact that the outer code is a random linear code.

The argument gets more tricky when C_out is fixed to be (say) the Reed-Solomon code. Now even if the messages m^1,...,m^{L+1} are linearly independent it is not clear that the corresponding codewords will satisfy the notion of independence in the above paragraph. Interestingly, we can show that this notion of independence is equivalent to showing good list recoverability properties for C_out. Reed-Solomon codes are however not known to have optimal list recoverability (which is what is required in our case). In fact, the results in [7] show that this is impossible for Reed-Solomon codes in general.
However, folded RS codes do have near-optimal list recoverability and we exploit this in our proof.

4 Using Folded Reed-Solomon Code as Outer Code

In this section, we will prove that concatenated codes with the outer code being the folded Reed-Solomon code from [9] and using random and independent inner codes can achieve list-decoding capacity. The proof will make crucial use of the list recoverability of the outer code as stated in Theorem 1.

4.1 Linear Independence from List Recoverability

Definition 3 (Independent tuples). Let C be a code of block length N and rate R defined over F_{q^k}. Let J ≥ 1 and 0 ≤ d_1,...,d_J ≤ N be integers. Let d = ⟨d_1,...,d_J⟩. An ordered tuple of codewords (c^1,...,c^J), c^j ∈ C, is said to be (d, F_q)-independent if the following holds: d_1 = wt(c^1) and for every 1 < j ≤ J, d_j is the number of positions i such that c_i^j is F_q-independent of the vectors {c_i^1,...,c_i^{j-1}}, where c^l = (c_1^l,...,c_N^l).

⁶ Again any set of L + 1 messages need not be linearly independent. However, it is easy to see that some subset of J = ⌈log_Q(L + 1)⌉ of the messages are indeed linearly independent. Hence, we can continue the argument by replacing L + 1 with J.

Note that for any tuple of codewords (c^1,...,c^J) there exists a unique d such that it is (d, F_q)-independent. The next result will be crucial in our proof.

Lemma 5. Let C be a folded Reed-Solomon code of block length N that is defined over F_Q with Q = q^k as guaranteed by Theorem 1. For any L-tuple of codewords from C, where L ≥ J·(N/ε²)^{O(ε^{-1}·J·log(q/R))} (where ε > 0 is the same as the one in Theorem 1), there exists a sub-tuple of J codewords such that the J-tuple is (d, F_q)-independent, where d = ⟨d_1,...,d_J⟩ with d_j ≥ (1 - R - ε)N for every 1 ≤ j ≤ J.

Proof. The proof is constructive. In particular, given an L-tuple of codewords, we will construct a J sub-tuple with the required property. The correctness of the procedure will hinge on the list recoverability of the folded Reed-Solomon code as guaranteed by Theorem 1.

We will construct the final sub-tuple iteratively. In the first step, pick any non-zero codeword in the L-tuple and call it c^1. As C has distance at least (1 - R)N (and 0 ∈ C), c^1 is non-zero in at least d_1 ≥ (1 - R)N > (1 - R - ε)N many places. Note that c^1 is vacuously independent of the previous codewords in these positions. Now, say that the procedure has chosen codewords c^1,...,c^s such that the tuple is (d, F_q)-independent for d = ⟨d_1,...,d_s⟩, where for every 1 ≤ j ≤ s, d_j ≥ (1 - R - ε)N. For every 1 ≤ i ≤ N, define S_i to be the F_q-span of the vectors {c_i^1,...,c_i^s} in F_q^k. Note that |S_i| ≤ q^s. Call c = (c_1,...,c_N) ∈ C a bad codeword if there does not exist any d_{s+1} ≥ (1 - R - ε)N such that (c^1,...,c^s,c) is (d', F_q)-independent for d' = ⟨d_1,...,d_{s+1}⟩. In other words, c is a bad codeword if and only if some T ⊆ [N] with |T| = (R + ε)N satisfies c_i ∈ S_i for every i ∈ T. Put differently, c satisfies the condition of being in the output list for list recovering C with input S_1,...,S_N and agreement fraction R + ε. Thus, by Theorem 1, the number of such bad codewords is U = (N/ε²)^{O(ε^{-1}·s·log(q/R))} ≤ (N/ε²)^{O(ε^{-1}·J·log(q/R))}, where J is the number of steps for which this greedy procedure can be applied.
Thus, as long as at each step there are strictly more than U codewords from the original L-tuple of codewords left, we can continue this greedy procedure. Note that we can continue this procedure J times, as long as J ≤ L/U.

Finally, we will need the following bound on the number of independent tuples for folded Reed-Solomon codes. Its proof follows from the fact that a codeword in a dimension K folded RS code is completely determined once values at K + 1 of its positions are fixed.

Lemma 6. Let C be a folded Reed-Solomon code of block length N and rate 0 < R < 1 that is defined over F_Q, where Q = q^k. Let J ≥ 1 and 0 ≤ d_1,...,d_J ≤ N be integers and define d = ⟨d_1,...,d_J⟩. Then the number of (d, F_q)-independent tuples in C is at most

    q^{NJ(J+1)}·Π_{j=1}^{J} Q^{max(d_j - N(1-R) + 1, 0)}.

Proof. Given a tuple (c^1,...,c^J) that is (d, F_q)-independent, define T_j ⊆ [N] with |T_j| = d_j, for 1 ≤ j ≤ J, to be the set of positions i where c_i^j is F_q-independent of {c_i^1,...,c_i^{j-1}}. We will estimate the number of (d, F_q)-independent tuples by first estimating a bound U_j on the number of choices for the j-th codeword in the tuple (given a fixed choice of the first j - 1 codewords). To complete the proof, we will show that

    U_j ≤ q^{N(J+1)}·Q^{max(d_j - N(1-R) + 1, 0)}.

A codeword c ∈ C can be the j-th codeword in the tuple in the following way. For every position in [N] \ T_j, c can take at most q^{j-1} ≤ q^J values (as in these positions the value has to lie in the F_q-span of the values of the first j - 1 codewords in that position). Since C is folded Reed-Solomon, once we fix the values at positions in [N] \ T_j, the codeword will be completely determined once any max(RN - (N - d_j) + 1, 0) = max(d_j - N(1-R) + 1, 0) positions in T_j are chosen (w.l.o.g. assume that they are the first so many positions). The number of choices for T_j is (N choose d_j) ≤ 2^N ≤ q^N. Thus, we have

    U_j ≤ q^N·(q^J)^{N-d_j}·Q^{max(d_j - N(1-R) + 1, 0)} ≤ q^{N(J+1)}·Q^{max(d_j - N(1-R) + 1, 0)},

as desired.

4.2 The Main Result

Theorem 2 (Main). Let q be a prime power and let 0 < r ≤ 1 be an arbitrary rational. Let 0 < ε < α_q(r) be an arbitrary real, where α_q(r) is as defined in (1), and let 0 < R ≤ (α_q(r) - ε)/r be a rational. Let k, n, K, N ≥ 1 be large enough integers such that k = rn and K = RN. Let C_out be a folded Reed-Solomon code over F_{q^k} of block length N and rate R. Let C_in^1,...,C_in^N be random linear codes over F_q, where C_in^i is generated by a random k × n matrix G_i over F_q and the random choices for G_1,...,G_N are all independent.⁷ Then the concatenated code C = C_out ∘ (C_in^1,...,C_in^N) is a

    (H_q^{-1}(1 - Rr) - ε, (N/ε⁴)^{O(r²·ε^{-4}·(1-R)^{-2}·log(1/R))})-list decodable

code with probability at least 1 - q^{-Ω(nN)} over the choices of G_1,...,G_N. Further, C has rate rR w.h.p.

Remark 1. For any desired rate R_0 ∈ (0, 1 - ε) for the final concatenated code (here ε > 0 is arbitrary), one can pick the outer and inner rates R, r such that Rr = R_0 while also satisfying R ≤ (α_q(r) - ε)/r. In fact we can pick r = 1 and R = R_0, so that the inner encodings are linear transformations specified by random k × k matrices and do not add any redundancy.

The rest of this section is devoted to proving Theorem 2. Define Q = q^k. Let L be the worst-case list size that we are shooting for (we will fix its value at the end).
By Lemma 5, any (L+1)-tuple of C_out codewords (u^0,...,u^L) ∈ (C_out)^{L+1} contains at least J = (L + 1)/(N/γ²)^{O(γ^{-1}·J·log(q/R))} codewords that form a (d, F_q)-independent tuple, for some d = ⟨d_1,...,d_J⟩ with d_j ≥ (1 - R - γ)N (we will specify γ, 0 < γ < 1 - R, later). Thus, to prove the theorem it suffices to show that with high probability, no Hamming ball in F_q^{nN} of radius (H_q^{-1}(1 - rR) - ε)nN contains a J-tuple of codewords (u^1·G,...,u^J·G), where (u^1,...,u^J) is a J-tuple of folded Reed-Solomon codewords that is (d, F_q)-independent.

For the rest of the proof, we will call a J-tuple of C_out codewords (u^1,...,u^J) a good tuple if it is (d, F_q)-independent for some d = ⟨d_1,...,d_J⟩, where d_j ≥ (1 - R - γ)N for every 1 ≤ j ≤ J.

Define ρ = H_q^{-1}(1 - Rr) - ε. For every good J-tuple of C_out codewords (u^1,...,u^J) and received word y ∈ F_q^{nN}, define an indicator variable I(y, u^1,...,u^J) as follows: I(y, u^1,...,u^J) = 1 if and

⁷ We stress that we do not require that the G_i's have rank k.

only if for every 1 ≤ j ≤ J, wt(u^j G - y) ≤ ρnN. That is, it captures the bad event that we want to avoid. Define

    X_C = Σ_{good (u^1,...,u^J) ∈ (C_out)^J} Σ_{y ∈ F_q^{nN}} I(y, u^1, ..., u^J).

We want to show that with high probability X_C = 0. By Markov's inequality, the theorem would follow if we can show that:

    E[X_C] = Σ_{y ∈ F_q^{nN}} Σ_{good (u^1,...,u^J) ∈ (C_out)^J} E[I(y, u^1, ..., u^J)] ≤ q^{-Ω(nN)}.   (2)

Before we proceed, we need a final bit of notation. For a good tuple (u^1, ..., u^J) and every 1 ≤ j ≤ J, define T_j(u^1, ..., u^J) ⊆ [N] to be the set of positions i such that u^j_i is F_q-independent of the set {u^1_i, ..., u^{j-1}_i}. (A subset of F_Q is linearly independent over F_q if its elements, when viewed as vectors from F_q^k (recall that F_{q^k} is isomorphic to F_q^k), are linearly independent over F_q.) Note that since the tuple is good, |T_j(u^1, ..., u^J)| ≥ (1 - R - γ)N. Let h = ρnN. Consider the following sequence of inequalities (where below we have suppressed the dependence of T_j on (u^1, ..., u^J) for clarity):

    E[X_C] = Σ_{y ∈ F_q^{nN}} Σ_{good (u^1,..,u^J) ∈ (C_out)^J} Pr_G[ ∧_{j=1}^J wt(u^j G - y) ≤ h ]   (3)
           ≤ Σ_{y ∈ F_q^{nN}} Σ_{good (u^1,..,u^J) ∈ (C_out)^J} Pr_G[ ∧_{j=1}^J wt_{T_j}(u^j G - y) ≤ h ]   (4)
           = Σ_{y ∈ F_q^{nN}} Σ_{good (u^1,..,u^J) ∈ (C_out)^J} Π_{j=1}^J Pr_G[ wt_{T_j}(u^j G - y) ≤ h ]   (5)

In the above, (3) follows from the definition of the indicator variable. (4) follows from the simple fact that for every vector u of length N and every T ⊆ [N], wt_T(u) ≤ wt(u). (5) follows from the subsequent argument. By the definition of conditional probability, Pr_G[ ∧_{j=1}^J wt_{T_j}(u^j G - y) ≤ h ] is the same as Pr_G[ wt_{T_J}(u^J G - y) ≤ h | ∧_{j=1}^{J-1} wt_{T_j}(u^j G - y) ≤ h ] · Pr_G[ ∧_{j=1}^{J-1} wt_{T_j}(u^j G - y) ≤ h ]. Now as all symbols corresponding to T_J are good symbols, for every i ∈ T_J the value of u^J_i G_i is independent of the values of {u^1_i G_i, ..., u^{J-1}_i G_i}. Further, since each of G_1, ..., G_N is chosen independently (at random), the event wt_{T_J}(u^J G - y) ≤ h is independent of the event ∧_{j=1}^{J-1} wt_{T_j}(u^j G - y) ≤ h. Thus,

    Pr_G[ ∧_{j=1}^J wt_{T_j}(u^j G - y) ≤ h ] = Pr_G[ wt_{T_J}(u^J G - y) ≤ h ] · Pr_G[ ∧_{j=1}^{J-1} wt_{T_j}(u^j G - y) ≤ h ].

Inductively applying the argument above gives (5). Further (where below we use D to denote (1 - R - γ)N),

    E[X_C] ≤ Σ_{y ∈ F_q^{nN}} Σ_{good (u^1,..,u^J) ∈ (C_out)^J} Π_{j=1}^J q^{-n|T_j|(1 - H_q(h/(n|T_j|)))}   (6)
           = Σ_{y ∈ F_q^{nN}} Σ_{(d_1,..,d_J) ∈ {D,..,N}^J} Σ_{good (u^1,..,u^J) ∈ (C_out)^J, |T_1|=d_1,..,|T_J|=d_J} Π_{j=1}^J q^{-n d_j (1 - H_q(h/(n d_j)))}   (7)
           ≤ Σ_{(d_1,..,d_J) ∈ {D,..,N}^J} q^{nN} q^{NJ(J+1)} Π_{j=1}^J Q^{max(d_j - (1-R)N + 1, 0)} q^{-n d_j (1 - H_q(h/(n d_j)))}   (8)
           ≤ Σ_{(d_1,..,d_J) ∈ {D,..,N}^J} q^{nN} q^{NJ(J+1)} Π_{j=1}^J q^{-n d_j (1 - H_q(h/(n d_j)) - r(1 - (1-R-γ)N/d_j))}   (9)
           = Σ_{(d_1,..,d_J) ∈ {D,..,N}^J} Π_{j=1}^J q^{-n d_j (1 - H_q(h/(n d_j)) - r(1 - (1-R-γ)N/d_j) - N/(J d_j) - N(J+1)/(n d_j))}.   (10)

(6) follows from (5) and Lemma 4. (7) follows from rearranging the summand and using the fact that the tuple is good (and hence d_j ≥ (1-R-γ)N). (8) follows from the fact that there are q^{nN} choices for y and Lemma 6.^8 (9) follows from the fact that d_j - (1-R)N + 1 ≤ d_j - (1-R-γ)N (for N ≥ 1/γ) and that d_j ≥ (1-R-γ)N. (10) follows by rearranging the terms.

Note that as long as n ≥ J(J+1), we have N(J+1)/(n d_j) ≤ N/(J d_j). Now (10) will imply (2) if we can show that for every (1-R-γ)N ≤ d ≤ N,

    H_q^{-1}(1 - r(1 - (1-R-γ)N/d)) - h/(nd) ≥ 2N/(Jd) + δ,

for δ = ε/3. By Lemma 7 (which is stated at the end of this section), as long as J ≥ 4c_q/(δ^2(1-R)) (and the conditions on γ are satisfied), the above can be satisfied by picking h/(nN) = H_q^{-1}(1 - rR) - 3δ = ρ, as required.

We now verify that the conditions on γ in Lemma 7 are satisfied by picking γ = 4/(Jr). Note that if we choose J = 4c_q/(δ^2(1-R)), we will have γ = δ^2(1-R)/(c_q r). Now, as 0 < R < 1, we also have γ ≤ δ^2/(r c_q). Finally, we show that γ ≤ (1-R)/2. Indeed,

    γ = δ^2(1-R)/(c_q r) = ε^2(1-R)/(9 c_q r) ≤ ε(1-R)/(9r) < α_q(r)(1-R)/(9r) < (1-R)/2,

^8 As the final code C will be linear, it is sufficient to only look at received words that have Hamming weight at most ρnN. However, this gives a negligible improvement to the final result and hence, we just bound the number of choices for y by q^{nN}.

where the first inequality follows from the facts that c_q ≥ 1 and ε ≤ 1. The second inequality follows from the assumption on ε. The third inequality follows from Lemma 2. As J is in Θ(1/(ε^2(1-R))) and γ is in Θ(ε^2(1-R)/r), we can choose

    L = (N/ε^4)^{O(r^2 ε^{-4} (1-R)^{-2} log(q/R))},

as required.

We still need to argue that with high probability the rate of the code C = C_out ∘ (C^1_in, ..., C^N_in) is rR. One way to argue this would be to show that with high probability all of the generator matrices have full rank. However, this is not the case: in fact, with some non-negligible probability at least one of them will not have full rank. However, we claim that with high probability C has distance > 0, and thus is a subspace of dimension rRnN. The proof above in fact implies that with high probability C has distance (H_q^{-1}(1 - rR) - δ)nN for any small enough δ > 0. It is easy to see that to show that C has distance at least h, it is enough to show that with high probability I(0, m) = 0 for every m ∈ F_Q^K. Note that this is a special case of our proof, with J = 1 and y = 0, and hence, with probability at least 1 - q^{-Ω(nN)}, the code C has large distance. The proof is thus complete, modulo the following lemma, which we prove next (following a similar argument in [15]).

Lemma 7. Let q ≥ 2 be an integer, and 1 ≤ n ≤ N be integers. Let 0 < r, R ≤ 1 be rationals and δ > 0 be a real such that R ≤ (α_q(r) - δ)/r and δ ≤ α_q(r), where α_q(r) is as defined in (1). Let γ > 0 be a real such that γ ≤ min((1-R)/2, δ^2/(c_q r)), where c_q is the constant that depends only on q from Lemma 1. Then for all integers J ≥ 4c_q/(δ^2(1-R)) and every integer h ≤ (H_q^{-1}(1 - rR) - 2δ)nN, the following is satisfied: for every integer (1-R-γ)N ≤ d ≤ N,

    H_q^{-1}(1 - r(1 - N(1-R-γ)/d)) - h/(nd) ≥ 2N/(Jd).   (11)

Proof. Using the fact that H_q^{-1} is an increasing function, (11) is satisfied if for every d̄ ≤ d ≤ N (where d̄ = (1-R-γ)N),

    (d/N) · H_q^{-1}(1 - r(1 - N(1-R-γ)/d)) - h/(nN) ≥ 2/J.

Define a new variable θ = 1 - N(1-R-γ)/d. Note that as d̄ = (1-R-γ)N ≤ d ≤ N, we have 0 ≤ θ ≤ R + γ. Also d/N = (1-R-γ)(1-θ)^{-1}. Thus, the above inequality would be satisfied if

    h/(nN) ≤ (1-R-γ) · min_{0≤θ≤R+γ} { (1-θ)^{-1} H_q^{-1}(1-rθ) - 2/((1-R-γ)J) }.
0 θ R+γ (1 R γj Again using te fact tat Hq 1 is an increasing function along wit te fact tat γ (1 R/2, we get tat te above is satisfied if nn (1 R γ min 0 θ R+γ { ( (1 θ 1 Hq 1 1 rθ ( 1 rθ 4 (1 RJ } 4. (1 RJ By Lemma 1, if J 4c q δ 2 (1 R, ten9 Hq 1 0 θ R + γ, (1 R γ(1 θ 1 δ δ, te above equation would be satisfied if 9 We also use te fact tat H 1 q nn (1 R γ min f r,q(θ δ. 0<θ R+γ is increasing. 14 Hq 1 (1 rθ δ. Since for every

Note that by the assumptions γ ≤ δ^2/(r c_q) ≤ δ/r (as δ ≤ 1 and c_q ≥ 1) and R ≤ (α_q(r) - δ)/r, we have R + γ ≤ α_q(r)/r. Thus, by using Lemma 3 we get that

    (1-R-γ) · min_{0<θ≤R+γ} f_{r,q}(θ) = H_q^{-1}(1 - rR - rγ).

By Lemma 1, the facts that γ ≤ δ^2/(r c_q) and that H_q^{-1} is increasing, we have H_q^{-1}(1 - rR - rγ) ≥ H_q^{-1}(1 - rR) - δ. This implies that (11) is satisfied if h/(nN) ≤ H_q^{-1}(1 - rR) - 2δ, as desired.

5 List Decodability of Random Concatenated Codes

In this section, we will look at the list decodability of concatenated codes when both the outer code and the inner codes are (independent) random linear codes. The following is the main result of this section.

Theorem 3. Let q be a prime power and let 0 < r ≤ 1 be an arbitrary rational. Let 0 < ε < α_q(r) be an arbitrary real, where α_q(r) is as defined in (1), and let 0 < R ≤ (α_q(r) - ε)/r be a rational. Let k, n, K, N ≥ 1 be large enough integers such that k = rn and K = RN. Let C_out be a random linear code over F_{q^k} that is generated by a random K × N matrix over F_{q^k}. Let C^1_in, ..., C^N_in be random linear codes over F_q, where C^i_in is generated by a random k × n matrix G_i, and the random choices for C_out, G_1, ..., G_N are all independent. Then the concatenated code C = C_out ∘ (C^1_in, ..., C^N_in) is a (H_q^{-1}(1 - Rr) - ε, q^{O(rn/(ε^2(1-R)))})-list decodable code with probability at least 1 - q^{-Ω(nN)} over the choices of C_out, G_1, ..., G_N. Further, with high probability, C has rate rR.

The intuition behind Theorem 3 is the following. W.h.p., a random code has a weight distribution and list recoverability properties very similar to those of folded Reed-Solomon codes. That is, Lemmas 5 and 6 hold w.h.p. for random C_out. However, we will prove Theorem 3 in a slightly different manner than the proof of Theorem 2, as it gives a better bound on the list size (see Remark 2 for a more quantitative comparison).

In the rest of this section, we will prove Theorem 3. Define Q = q^k. Let L be the worst-case list size that we are shooting for (we will fix its value at the end).
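Both theorems set the decoding radius to ρ = H_q^{-1}(1 - Rr) - ε. As a side note, the q-ary entropy and its inverse are straightforward to evaluate numerically; the sketch below (helper names and sample parameters are mine, not the paper's) computes ρ for a binary code of overall rate rR = 0.3.

```python
import math

def entropy_q(x, q=2):
    """q-ary entropy H_q(x) = x log_q(q-1) - x log_q(x) - (1-x) log_q(1-x)."""
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q) - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def entropy_q_inv(y, q=2, iters=80):
    """Inverse of H_q on [0, 1 - 1/q], where H_q increases from 0 to 1
    (computed by bisection)."""
    lo, hi = 0.0, 1.0 - 1.0 / q
    for _ in range(iters):
        mid = (lo + hi) / 2
        if entropy_q(mid, q) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Sample parameters (illustrative): overall rate rR = 0.3, slack eps = 0.01.
q, rate, eps = 2, 0.3, 0.01
rho = entropy_q_inv(1 - rate, q) - eps
```

By construction, rate = 1 - H_q(ρ + ε); i.e., such codes operate within ε of the list-decoding capacity 1 - H_q(ρ) mentioned in the abstract.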
The first observation is that any (L+1)-tuple of messages (m_1, ..., m_{L+1}) ∈ (F_Q^K)^{L+1} contains at least J = log_Q(L+1) many messages that are linearly independent over F_Q. Thus, to prove the theorem it suffices to show that with high probability, no Hamming ball over F_q^{nN} of radius (H_q^{-1}(1 - rR) - ε)nN contains a J-tuple of codewords (C(m_1), ..., C(m_J)), where m_1, ..., m_J are linearly independent over F_Q.

Define ρ = H_q^{-1}(1 - Rr) - ε. For every J-tuple of linearly independent messages (m_1, ..., m_J) ∈ (F_Q^K)^J and received word y ∈ F_q^{nN}, define an indicator random variable I(y, m_1, ..., m_J) as follows. I(y, m_1, ..., m_J) = 1 if and only if for every 1 ≤ j ≤ J, wt(C(m_j) - y) ≤ ρnN. That is, it captures the bad event that we want to avoid. Define

    X_C = Σ_{y ∈ F_q^{nN}} Σ_{(m_1,...,m_J) ∈ Ind(Q,K,J)} I(y, m_1, ..., m_J),

where Ind(Q,K,J) denotes the collection of subsets of F_Q-linearly independent vectors from F_Q^K of size J. We want to show that with high probability X_C = 0. By Markov's inequality, the theorem

would follow if we can show that

    E[X_C] = Σ_{y ∈ F_q^{nN}} Σ_{(m_1,...,m_J) ∈ Ind(Q,K,J)} E[I(y, m_1, ..., m_J)] is at most q^{-Ω(nN)}.   (12)

Note that the number of distinct possibilities for y, m_1, ..., m_J is upper bounded by q^{nN} · Q^{RNJ} = q^{nN(1+rRJ)}. Fix some arbitrary choice of y, m_1, ..., m_J. To prove (12), we will show that

    q^{nN(1+rRJ)} · E[I(y, m_1, ..., m_J)] ≤ q^{-Ω(nN)}.   (13)

Before we proceed, we need some more notation. Given vectors u^1, ..., u^J ∈ F_Q^N, we define Z(u^1, ..., u^J) = (Z_1, ..., Z_N) as follows. For every 1 ≤ i ≤ N, Z_i ⊆ [J] denotes the largest subset such that the elements (u^j_i)_{j ∈ Z_i} are linearly independent over F_q (in case of a tie, choose the lexically first such set), where u^j = (u^j_1, ..., u^j_N). If j ∈ Z_i then we will call u^j_i a good symbol. Note that a good symbol is always non-zero. We will also define another partition of all the good symbols, T(u^1, ..., u^J) = (T_1, ..., T_J), by setting T_j = {i | j ∈ Z_i} for 1 ≤ j ≤ J.

Since m_1, ..., m_J are linearly independent over F_Q, the corresponding codewords in C_out are distributed uniformly and independently in F_Q^N. In other words, for any fixed (u^1, ..., u^J) ∈ (F_Q^N)^J,

    Pr_{C_out}[ ∧_{j=1}^J C_out(m_j) = u^j ] = Q^{-NJ} = q^{-rnNJ}.   (14)

Recall that we denote the (random) generator matrices for the inner code C^i_in by G_i for every 1 ≤ i ≤ N. Also note that every (u^1, ..., u^J) ∈ (F_Q^N)^J has a unique Z(u^1, ..., u^J). In other words, the 2^{NJ} choices of Z partition the tuples in (F_Q^N)^J.

Let h = ρnN. Consider the following calculation (where the dependence of Z and T on u^1, ..., u^J has been suppressed for clarity):

    E[I(y, m_1, ..., m_J)] = Σ_{(u^1,...,u^J) ∈ (F_Q^N)^J} Pr_{C_out}[ ∧_{j=1}^J C_out(m_j) = u^j ] · Pr_{G=(G_1,...,G_N)}[ ∧_{j=1}^J wt(u^j G - y) ≤ h ]   (15)
     = q^{-rnNJ} Σ_{(u^1,...,u^J) ∈ (F_Q^N)^J} Pr_{G=(G_1,...,G_N)}[ ∧_{j=1}^J wt(u^j G - y) ≤ h ]   (16)
     ≤ q^{-rnNJ} Σ_{(u^1,...,u^J) ∈ (F_Q^N)^J} Pr_{G=(G_1,...,G_N)}[ ∧_{j=1}^J wt_{T_j}(u^j G - y) ≤ h ]   (17)
     = q^{-rnNJ} Σ_{(u^1,...,u^J) ∈ (F_Q^N)^J} Π_{j=1}^J Pr_G[ wt_{T_j}(u^j G - y) ≤ h ]   (18)

In the above, (15) follows from the fact that the (random) choices for C_out and G = (G_1, ..., G_N) are all independent. (16) follows from (14). (17) follows from the simple fact that for every y ∈ (F_q^n)^N and T ⊆ [N], wt_T(y) ≤ wt(y). (18) follows from the same argument used to prove (5). Further,

    E[I(y, m_1, ..., m_J)] ≤ q^{-rnNJ} Σ_{(d_1,...,d_J) ∈ {0,...,N}^J} Σ_{(u^1,...,u^J), |T_1|=d_1,...,|T_J|=d_J} Π_{j=1}^J Pr_G[ wt_{T_j}(u^j G - y) ≤ h ]   (19)
     = q^{-rnNJ} Σ_{(d_1,...,d_J) ∈ {0,...,N}^J} Σ_{Z with |T_j|=d_j} Σ_{(u^1,...,u^J), Z(u^1,...,u^J)=Z} Π_{j=1}^J Pr_G[ wt_{T_j}(u^j G - y) ≤ h ]   (20)
     ≤ q^{-rnNJ} Σ_{(d_1,...,d_J) ∈ {0,...,N}^J} q^{JN + (rn+J) Σ_j d_j} Π_{j=1}^J Pr_G[ wt_{T_j}(u^j G - y) ≤ h ]   (21)
     = Σ_{(d_1,...,d_J) ∈ {0,...,N}^J} Π_{j=1}^J q^{n(r(d_j - N) + J d_j/n + N/n)} Pr_G[ wt_{T_j}(u^j G - y) ≤ h ]   (22)

In the above, (19), (20), and (22) follow from rearranging and grouping the summands. (21) uses the following argument. Given a fixed Z = (Z_1, ..., Z_N), the number of tuples (u^1, ..., u^J) such that Z(u^1, ..., u^J) = Z is at most U = Π_{i=1}^N q^{|Z_i| k} · q^{|Z_i|(J - |Z_i|)}, where q^{|Z_i| k} is an upper bound on the number of |Z_i| linearly independent vectors from F_q^k, and q^{|Z_i|(J - |Z_i|)} follows from the fact that every bad symbol u^j_i, j ∉ Z_i, has to take a value that is a linear combination of the symbols {u^{j'}_i}_{j' ∈ Z_i}. Now U ≤ Π_{i=1}^N q^{|Z_i|(k+J)} = q^{(k+J) Σ_i |Z_i|} = q^{(k+J) Σ_j |T_j|}. Finally, recall that there are 2^{JN} ≤ q^{JN} distinct choices for Z. (22) implies the following:

    q^{nN(1+rRJ)} · E[I(y, m_1, ..., m_J)] ≤ Σ_{(d_1,...,d_J) ∈ {0,...,N}^J} Π_{j=1}^J E_j,

where

    E_j = q^{n( r(d_j - N(1-R)) + N/J + J d_j/n + N/n )} · Pr_G[ wt_{T_j}(u^j G - y) ≤ h ].

We now proceed to upper bound E_j by q^{-Ω(nN/J)} for every 1 ≤ j ≤ J. Note that this will imply the claimed result, as there are at most (N+1)^J = q^{o(nN)} choices for different values of the d_j's.

We first start with the case when d_j < d̄, where d̄ = N(1-R-γ), for some parameter 0 < γ < 1-R to be defined later (note that we did not have to deal with this case in the proof of Theorem 2). In this case we use the fact that Pr_G[ wt_{T_j}(u^j G - y) ≤ h ] ≤ 1.

Thus, we would be done if we can show that

    (1/N) · ( r(d_j - N(1-R)) + N/J + J d_j/n + N/n ) ≤ -δ < 0,

for some δ > 0 that we will choose soon. The above would be satisfied if

    d_j/N < (1-R) - (1/r)(1/J + J d_j/(nN) + 1/n + δ),

as d_j < d̄. Note that if n > J(J d_j/N + 1), then J d_j/(nN) + 1/n < 1/J, so it is enough to choose γ ≥ (1/r)(2/J + δ); if we set δ = 1/(4J), it is enough to choose γ = 4/(Jr).

We now turn our attention to the case when d_j ≥ d̄. The arguments are very similar to those employed in the proof of Theorem 2. In this case, by Lemma 4 we have

    E_j ≤ q^{-n d_j ( 1 - H_q(h/(n d_j)) - r(1 - N(1-R)/d_j) - N/(J d_j) - J/n - N/(n d_j) )}.

The above implies that we can show that E_j is q^{-Ω(nN(1-R-γ))} provided we show that for every d̄ ≤ d ≤ N,

    H_q^{-1}(1 - r(1 - N(1-R)/d)) - h/(nd) ≥ N/(Jd) + J/n + N/(nd) + δ,

for δ = ε/3. Now if n ≥ 2J^2, then both J/n ≤ N/(2Jd) and N/(nd) ≤ N/(2Jd). In other words, J/n + N/(nd) ≤ N/(Jd). Using the fact that H_q^{-1} is increasing, the above is satisfied if

    H_q^{-1}(1 - r(1 - N(1-R-γ)/d)) - h/(nd) ≥ 2N/(Jd) + δ.

As in the proof of Theorem 2, as long as J ≥ 4c_q/(δ^2(1-R)), by Lemma 7 the above can be satisfied by picking h/(nN) = H_q^{-1}(1 - rR) - 3δ = ρ, as required.

Note that J = O(1/((1-R)ε^2)), which implies L = Q^{O(1/((1-R)ε^2))}, as claimed in the statement of the theorem. Again using the same argument used in the proof of Theorem 2, it can be shown that with high probability the rate of the code C = C_out ∘ (C^1_in, ..., C^N_in) is rR. The proof is complete.

Remark 2. The proof of Theorem 3 does not use the list recoverability property of the outer code directly. The idea of using list recoverability to argue independence can also be used to prove Theorem 3. That is, first show that with good probability, a random linear outer code will have good list recoverability; then the argument in the previous section can be used to prove Theorem 3. However, this gives worse parameters than the proof above. In particular, by a straightforward application of the probabilistic method, one can show that a random linear code of rate R over F_Q is (R+γ, ℓ, Q^{ℓ/γ})-list recoverable [4, Sec 9.3.2]. In the proof of Theorem 2, ℓ is roughly q^J, where J is roughly 1/ε^2.
Thus, if we used the arguments in the proof of Theorem 2, we would be able to prove Theorem 3, but with lists of size Q^{q^{O(ε^{-2}(1-R)^{-1})}}, which is worse than the list size of Q^{O(ε^{-2}(1-R)^{-1})} guaranteed by Theorem 3.
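The first observation in the proof above — any L+1 distinct messages contain at least log_Q(L+1) linearly independent ones, since J independent vectors span only Q^J points — can be illustrated over F_2 with a small greedy extraction (a toy sketch; the function and encoding are mine, not the paper's):

```python
def gf2_independent_subset(msgs):
    """Greedily extract an F_2-linearly independent subset of the given
    messages, each encoded as an integer bitmask, by Gaussian elimination
    on the highest set bit."""
    pivots = {}   # highest-bit position -> reduced basis vector
    chosen = []
    for m in msgs:
        v = m
        while v:
            b = v.bit_length() - 1
            if b not in pivots:
                pivots[b] = v
                chosen.append(m)
                break
            v ^= pivots[b]   # eliminate the leading bit and keep reducing
    return chosen

# All 16 vectors of F_2^4: among L+1 = 16 distinct messages there must be
# at least log_2(16) = 4 independent ones (here, exactly 4).
indep = gf2_independent_subset(list(range(16)))
assert len(indep) == 4
```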

Remark 3. In a typical use of concatenated codes, the block lengths of the inner and outer codes satisfy n = Θ(log N), in which case the concatenated code of Theorem 3 is list decodable with lists of size N^{O(ε^{-2}(1-R)^{-1})}. However, the proof of Theorem 3 also works with smaller n. In particular, as long as n is at least 3J^2, the proof of Theorem 3 goes through. Thus, with n in Θ(J^2), one can get concatenated codes that are list decodable up to the list-decoding capacity with lists of size q^{O(ε^{-6}(1-R)^{-3})}.

6 Open Questions

In this work, we have shown that the family of concatenated codes is rich enough to contain codes that achieve the list-decoding capacity. But realizing the full potential of concatenated codes and achieving capacity (or even substantially improving upon the Blokh-Zyablov bound [8]) with explicit codes and polynomial-time decoding remains a huge challenge. Achieving an explicit construction even without the requirement of an efficient list-decoding algorithm (but only good combinatorial list-decodability properties) is itself wide open. The difficulty with explicit constructions is that we do not have any handle on the structure of inner codes that lead to concatenated codes with the required properties. In fact, we do not know of any efficient algorithm to even verify that a given set of inner codes will work, so even a Las Vegas construction appears difficult (a similar situation holds for binary codes meeting the Gilbert-Varshamov trade-off between rate and relative distance).

References

[1] E. L. Blokh and Victor V. Zyablov. Existence of linear concatenated binary codes with optimal correcting properties. Probl. Peredachi Inform., 9:3-10, 1973.

[2] Ilya I. Dumer. Concatenated codes and their multilevel generalizations. In V. S. Pless and W. C. Huffman, editors, Handbook of Coding Theory, volume 2. North-Holland, 1998.

[3] G. David Forney. Concatenated Codes. MIT Press, Cambridge, MA, 1966.

[4] Venkatesan Guruswami. List decoding of error-correcting codes. Number 3282 in Lecture Notes in Computer Science.
Springer, 2004. (Winning thesis of the 2002 ACM Doctoral Dissertation Competition.)

[5] Venkatesan Guruswami, Johan Håstad, Madhu Sudan, and David Zuckerman. Combinatorial bounds for list decoding. IEEE Transactions on Information Theory, 48(5), 2002.

[6] Venkatesan Guruswami and Piotr Indyk. Expander-based constructions of efficiently decodable codes. In Proceedings of the 42nd Annual IEEE Symposium on Foundations of Computer Science, 2001.

[7] Venkatesan Guruswami and Atri Rudra. Limits to list decoding Reed-Solomon codes. IEEE Transactions on Information Theory, 52(8), August 2006.

[8] Venkatesan Guruswami and Atri Rudra. Better binary list-decodable codes via multilevel concatenation. In Proceedings of the 11th International Workshop on Randomization and Computation (RANDOM), 2007.

[9] Venkatesan Guruswami and Atri Rudra. Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Transactions on Information Theory, 54(1), January 2008.

[10] F. J. MacWilliams and Neil J. A. Sloane. The Theory of Error-Correcting Codes. Elsevier/North-Holland, Amsterdam, 1977.

[11] Jørn Justesen. A class of constructive asymptotically good algebraic codes. IEEE Transactions on Information Theory, 18, 1972.

[12] Atri Rudra. List Decoding and Property Testing of Error-Correcting Codes. PhD thesis, University of Washington, 2007.

[13] Michael Sipser and Daniel Spielman. Expander codes. IEEE Transactions on Information Theory, 42(6), 1996.

[14] Daniel Spielman. The complexity of error-correcting codes. In Proceedings of the 11th International Symposium on Fundamentals of Computation Theory, LNCS #1279, pages 67-84, 1997.

[15] Christian Thommesen. The existence of binary linear concatenated codes with Reed-Solomon outer codes which asymptotically meet the Gilbert-Varshamov bound. IEEE Transactions on Information Theory, 29(6), November 1983.

A Proof of Lemma 3

Proof. The proof follows from the subsequent geometric interpretations of f_{x,q}(·) and α_q(·). See Figure 1 for a pictorial illustration of the arguments used in this proof (for q = 2).

First, we claim that for any 0 ≤ z_0 ≤ 1, α_q(z_0) satisfies the following property: the line segment between (α_q(z_0), H_q^{-1}(1 - α_q(z_0))) and (z_0, 0) is tangent to the curve H_q^{-1}(1-z) at z = α_q(z_0). Thus, we need to show that

    H_q^{-1}(1 - α_q(z_0)) / (z_0 - α_q(z_0)) = (H_q^{-1})'(1 - α_q(z_0)).   (23)

One can check that

    (H_q^{-1})'(1-x) = 1 / H_q'(H_q^{-1}(1-x)) = 1 / ( log_q(q-1) - log_q(H_q^{-1}(1-x)) + log_q(1 - H_q^{-1}(1-x)) ).

Recall that, by the definition of α_q(·) in (1), H_q^{-1}(1 - α_q(z_0)) = 1 - q^{z_0-1}. Thus,

    z_0 - α_q(z_0) = z_0 - 1 + H_q(1 - q^{z_0-1})
                   = (z_0 - 1) + (1 - q^{z_0-1}) log_q(q-1) - (1 - q^{z_0-1}) log_q(1 - q^{z_0-1}) - q^{z_0-1}(z_0 - 1)
                   = (1 - q^{z_0-1}) ( log_q(q-1) - log_q(1 - q^{z_0-1}) + z_0 - 1 ).
Now, since z_0 - 1 = log_q(1 - H_q^{-1}(1 - α_q(z_0))) and 1 - q^{z_0-1} = H_q^{-1}(1 - α_q(z_0)), the above gives

    z_0 - α_q(z_0) = H_q^{-1}(1 - α_q(z_0)) · ( log_q(q-1) - log_q(H_q^{-1}(1 - α_q(z_0))) + log_q(1 - H_q^{-1}(1 - α_q(z_0))) ) = H_q^{-1}(1 - α_q(z_0)) / (H_q^{-1})'(1 - α_q(z_0)),

which establishes (23).
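The closed form for (H_q^{-1})' used in the computation above can be sanity-checked numerically against a central finite difference. The sketch below uses my own helper names, with the entropy inverse computed by bisection.

```python
import math

def entropy_q(x, q=2):
    """q-ary entropy H_q(x)."""
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q) - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def entropy_q_inv(y, q=2, iters=100):
    """Inverse of H_q on [0, 1 - 1/q], by bisection."""
    lo, hi = 0.0, 1.0 - 1.0 / q
    for _ in range(iters):
        mid = (lo + hi) / 2
        if entropy_q(mid, q) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def d_entropy_q_inv(x, q=2):
    """(H_q^{-1})'(1-x) = 1 / (log_q(q-1) - log_q(y) + log_q(1-y)),
    where y = H_q^{-1}(1-x), as derived above."""
    y = entropy_q_inv(1 - x, q)
    return 1.0 / (math.log(q - 1, q) - math.log(y, q) + math.log(1 - y, q))

# Central finite difference of H_q^{-1} at the point 1 - x.
q, x, step = 2, 0.5, 1e-6
numeric = (entropy_q_inv(1 - x + step, q) - entropy_q_inv(1 - x - step, q)) / (2 * step)
```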


More information

The Complexity of Computing the MCD-Estimator

The Complexity of Computing the MCD-Estimator Te Complexity of Computing te MCD-Estimator Torsten Bernolt Lerstul Informatik 2 Universität Dortmund, Germany torstenbernolt@uni-dortmundde Paul Fiscer IMM, Danisc Tecnical University Kongens Lyngby,

More information

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER*

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER* EO BOUNDS FO THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BADLEY J. LUCIE* Abstract. Te expected error in L ) attimet for Glimm s sceme wen applied to a scalar conservation law is bounded by + 2 ) ) /2 T

More information

Continuity and Differentiability of the Trigonometric Functions

Continuity and Differentiability of the Trigonometric Functions [Te basis for te following work will be te definition of te trigonometric functions as ratios of te sides of a triangle inscribed in a circle; in particular, te sine of an angle will be defined to be te

More information

Solutions to the Multivariable Calculus and Linear Algebra problems on the Comprehensive Examination of January 31, 2014

Solutions to the Multivariable Calculus and Linear Algebra problems on the Comprehensive Examination of January 31, 2014 Solutions to te Multivariable Calculus and Linear Algebra problems on te Compreensive Examination of January 3, 24 Tere are 9 problems ( points eac, totaling 9 points) on tis portion of te examination.

More information

ch (for some fixed positive number c) reaching c

ch (for some fixed positive number c) reaching c GSTF Journal of Matematics Statistics and Operations Researc (JMSOR) Vol. No. September 05 DOI 0.60/s4086-05-000-z Nonlinear Piecewise-defined Difference Equations wit Reciprocal and Cubic Terms Ramadan

More information

Bridging Shannon and Hamming: List Error-Correction with Optimal Rate

Bridging Shannon and Hamming: List Error-Correction with Optimal Rate Proceedings of the International Congress of Mathematicians Hyderabad, India, 2010 Bridging Shannon and Hamming: List Error-Correction with Optimal Rate Venkatesan Guruswami Abstract. Error-correcting

More information

Linear time list recovery via expander codes

Linear time list recovery via expander codes Linear time list recovery via expander codes Brett Hemenway and Mary Wootters June 7 26 Outline Introduction List recovery Expander codes List recovery of expander codes Conclusion Our Results One slide

More information

A SHORT INTRODUCTION TO BANACH LATTICES AND

A SHORT INTRODUCTION TO BANACH LATTICES AND CHAPTER A SHORT INTRODUCTION TO BANACH LATTICES AND POSITIVE OPERATORS In tis capter we give a brief introduction to Banac lattices and positive operators. Most results of tis capter can be found, e.g.,

More information

WHEN GENERALIZED SUMSETS ARE DIFFERENCE DOMINATED

WHEN GENERALIZED SUMSETS ARE DIFFERENCE DOMINATED WHEN GENERALIZED SUMSETS ARE DIFFERENCE DOMINATED VIRGINIA HOGAN AND STEVEN J. MILLER Abstract. We study te relationsip between te number of minus signs in a generalized sumset, A + + A A, and its cardinality;

More information

Regularized Regression

Regularized Regression Regularized Regression David M. Blei Columbia University December 5, 205 Modern regression problems are ig dimensional, wic means tat te number of covariates p is large. In practice statisticians regularize

More information

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these. Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra

More information

Analytic Functions. Differentiable Functions of a Complex Variable

Analytic Functions. Differentiable Functions of a Complex Variable Analytic Functions Differentiable Functions of a Complex Variable In tis capter, we sall generalize te ideas for polynomials power series of a complex variable we developed in te previous capter to general

More information

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems Applied Matematics, 06, 7, 74-8 ttp://wwwscirporg/journal/am ISSN Online: 5-7393 ISSN Print: 5-7385 Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for

More information

Improved Algorithms for Largest Cardinality 2-Interval Pattern Problem

Improved Algorithms for Largest Cardinality 2-Interval Pattern Problem Journal of Combinatorial Optimization manuscript No. (will be inserted by te editor) Improved Algoritms for Largest Cardinality 2-Interval Pattern Problem Erdong Cen, Linji Yang, Hao Yuan Department of

More information

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx. Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions

More information

Lecture 28: Generalized Minimum Distance Decoding

Lecture 28: Generalized Minimum Distance Decoding Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 007) Lecture 8: Generalized Minimum Distance Decoding November 5, 007 Lecturer: Atri Rudra Scribe: Sandipan Kundu & Atri Rudra 1

More information

Section 15.6 Directional Derivatives and the Gradient Vector

Section 15.6 Directional Derivatives and the Gradient Vector Section 15.6 Directional Derivatives and te Gradient Vector Finding rates of cange in different directions Recall tat wen we first started considering derivatives of functions of more tan one variable,

More information

2011 Fermat Contest (Grade 11)

2011 Fermat Contest (Grade 11) Te CENTRE for EDUCATION in MATHEMATICS and COMPUTING 011 Fermat Contest (Grade 11) Tursday, February 4, 011 Solutions 010 Centre for Education in Matematics and Computing 011 Fermat Contest Solutions Page

More information

Continuity. Example 1

Continuity. Example 1 Continuity MATH 1003 Calculus and Linear Algebra (Lecture 13.5) Maoseng Xiong Department of Matematics, HKUST A function f : (a, b) R is continuous at a point c (a, b) if 1. x c f (x) exists, 2. f (c)

More information

DIGRAPHS FROM POWERS MODULO p

DIGRAPHS FROM POWERS MODULO p DIGRAPHS FROM POWERS MODULO p Caroline Luceta Box 111 GCC, 100 Campus Drive, Grove City PA 1617 USA Eli Miller PO Box 410, Sumneytown, PA 18084 USA Clifford Reiter Department of Matematics, Lafayette College,

More information

Bayesian ML Sequence Detection for ISI Channels

Bayesian ML Sequence Detection for ISI Channels Bayesian ML Sequence Detection for ISI Cannels Jill K. Nelson Department of Electrical and Computer Engineering George Mason University Fairfax, VA 030 Email: jnelson@gmu.edu Andrew C. Singer Department

More information

Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem

Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem IITM-CS6845: Theory Toolkit February 08, 2012 Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem Lecturer: Jayalal Sarma Scribe: Dinesh K Theme: Error correcting codes In the previous lecture,

More information

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set WYSE Academic Callenge 00 Sectional Matematics Solution Set. Answer: B. Since te equation can be written in te form x + y, we ave a major 5 semi-axis of lengt 5 and minor semi-axis of lengt. Tis means

More information

Function Composition and Chain Rules

Function Composition and Chain Rules Function Composition and s James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 8, 2017 Outline 1 Function Composition and Continuity 2 Function

More information

Order of Accuracy. ũ h u Ch p, (1)

Order of Accuracy. ũ h u Ch p, (1) Order of Accuracy 1 Terminology We consider a numerical approximation of an exact value u. Te approximation depends on a small parameter, wic can be for instance te grid size or time step in a numerical

More information

2.1 THE DEFINITION OF DERIVATIVE

2.1 THE DEFINITION OF DERIVATIVE 2.1 Te Derivative Contemporary Calculus 2.1 THE DEFINITION OF DERIVATIVE 1 Te grapical idea of a slope of a tangent line is very useful, but for some uses we need a more algebraic definition of te derivative

More information

. If lim. x 2 x 1. f(x+h) f(x)

. If lim. x 2 x 1. f(x+h) f(x) Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value

More information

Lecture 10: Carnot theorem

Lecture 10: Carnot theorem ecture 0: Carnot teorem Feb 7, 005 Equivalence of Kelvin and Clausius formulations ast time we learned tat te Second aw can be formulated in two ways. e Kelvin formulation: No process is possible wose

More information

1 Proving the Fundamental Theorem of Statistical Learning

1 Proving the Fundamental Theorem of Statistical Learning THEORETICAL MACHINE LEARNING COS 5 LECTURE #7 APRIL 5, 6 LECTURER: ELAD HAZAN NAME: FERMI MA ANDDANIEL SUO oving te Fundaental Teore of Statistical Learning In tis section, we prove te following: Teore.

More information

Lecture 3: Error Correcting Codes

Lecture 3: Error Correcting Codes CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error

More information

Near-Optimal conversion of Hardness into Pseudo-Randomness

Near-Optimal conversion of Hardness into Pseudo-Randomness Near-Optimal conversion of Hardness into Pseudo-Randomness Russell Impagliazzo Computer Science and Engineering UC, San Diego 9500 Gilman Drive La Jolla, CA 92093-0114 russell@cs.ucsd.edu Ronen Saltiel

More information

MAT 145. Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points

MAT 145. Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points MAT 15 Test #2 Name Solution Guide Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points Use te grap of a function sown ere as you respond to questions 1 to 8. 1. lim f (x) 0 2. lim

More information

Polynomials 3: Powers of x 0 + h

Polynomials 3: Powers of x 0 + h near small binomial Capter 17 Polynomials 3: Powers of + Wile it is easy to compute wit powers of a counting-numerator, it is a lot more difficult to compute wit powers of a decimal-numerator. EXAMPLE

More information

Cubic Functions: Local Analysis

Cubic Functions: Local Analysis Cubic function cubing coefficient Capter 13 Cubic Functions: Local Analysis Input-Output Pairs, 378 Normalized Input-Output Rule, 380 Local I-O Rule Near, 382 Local Grap Near, 384 Types of Local Graps

More information

Decoding Concatenated Codes using Soft Information

Decoding Concatenated Codes using Soft Information Decoding Concatenated Codes using Soft Information Venkatesan Guruswami University of California at Berkeley Computer Science Division Berkeley, CA 94720. venkat@lcs.mit.edu Madhu Sudan MIT Laboratory

More information

Functions of the Complex Variable z

Functions of the Complex Variable z Capter 2 Functions of te Complex Variable z Introduction We wis to examine te notion of a function of z were z is a complex variable. To be sure, a complex variable can be viewed as noting but a pair of

More information

REVIEW LAB ANSWER KEY

REVIEW LAB ANSWER KEY REVIEW LAB ANSWER KEY. Witout using SN, find te derivative of eac of te following (you do not need to simplify your answers): a. f x 3x 3 5x x 6 f x 3 3x 5 x 0 b. g x 4 x x x notice te trick ere! x x g

More information

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example,

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example, NUMERICAL DIFFERENTIATION James T Smit San Francisco State University In calculus classes, you compute derivatives algebraically: for example, f( x) = x + x f ( x) = x x Tis tecnique requires your knowing

More information

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems 5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we

More information

Error-Correcting Codes:

Error-Correcting Codes: Error-Correcting Codes: Progress & Challenges Madhu Sudan Microsoft/MIT Communication in presence of noise We are not ready Sender Noisy Channel We are now ready Receiver If information is digital, reliability

More information

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t).

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t). . Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd, periodic function tat as been sifted upwards, so we will use

More information

Domination Problems in Nowhere-Dense Classes of Graphs

Domination Problems in Nowhere-Dense Classes of Graphs LIPIcs Leibniz International Proceedings in Informatics Domination Problems in Nowere-Dense Classes of Graps Anuj Dawar 1, Stepan Kreutzer 2 1 University of Cambridge Computer Lab, U.K. anuj.dawar@cl.cam.ac.uk

More information

Dedicated to the 70th birthday of Professor Lin Qun

Dedicated to the 70th birthday of Professor Lin Qun Journal of Computational Matematics, Vol.4, No.3, 6, 4 44. ACCELERATION METHODS OF NONLINEAR ITERATION FOR NONLINEAR PARABOLIC EQUATIONS Guang-wei Yuan Xu-deng Hang Laboratory of Computational Pysics,

More information

MATH745 Fall MATH745 Fall

MATH745 Fall MATH745 Fall MATH745 Fall 5 MATH745 Fall 5 INTRODUCTION WELCOME TO MATH 745 TOPICS IN NUMERICAL ANALYSIS Instructor: Dr Bartosz Protas Department of Matematics & Statistics Email: bprotas@mcmasterca Office HH 36, Ext

More information

Math Spring 2013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, (1/z) 2 (1/z 1) 2 = lim

Math Spring 2013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, (1/z) 2 (1/z 1) 2 = lim Mat 311 - Spring 013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, 013 Question 1. [p 56, #10 (a)] 4z Use te teorem of Sec. 17 to sow tat z (z 1) = 4. We ave z 4z (z 1) = z 0 4 (1/z) (1/z

More information

0.1 Differentiation Rules

0.1 Differentiation Rules 0.1 Differentiation Rules From our previous work we ve seen tat it can be quite a task to calculate te erivative of an arbitrary function. Just working wit a secon-orer polynomial tings get pretty complicate

More information

The Laplace equation, cylindrically or spherically symmetric case

The Laplace equation, cylindrically or spherically symmetric case Numerisce Metoden II, 7 4, und Übungen, 7 5 Course Notes, Summer Term 7 Some material and exercises Te Laplace equation, cylindrically or sperically symmetric case Electric and gravitational potential,

More information

INTRODUCTION AND MATHEMATICAL CONCEPTS

INTRODUCTION AND MATHEMATICAL CONCEPTS INTODUCTION ND MTHEMTICL CONCEPTS PEVIEW Tis capter introduces you to te basic matematical tools for doing pysics. You will study units and converting between units, te trigonometric relationsips of sine,

More information

Differential Calculus (The basics) Prepared by Mr. C. Hull

Differential Calculus (The basics) Prepared by Mr. C. Hull Differential Calculus Te basics) A : Limits In tis work on limits, we will deal only wit functions i.e. tose relationsips in wic an input variable ) defines a unique output variable y). Wen we work wit

More information

2.3 More Differentiation Patterns

2.3 More Differentiation Patterns 144 te derivative 2.3 More Differentiation Patterns Polynomials are very useful, but tey are not te only functions we need. Tis section uses te ideas of te two previous sections to develop tecniques for

More information

2.3 Algebraic approach to limits

2.3 Algebraic approach to limits CHAPTER 2. LIMITS 32 2.3 Algebraic approac to its Now we start to learn ow to find its algebraically. Tis starts wit te simplest possible its, and ten builds tese up to more complicated examples. Fact.

More information

INTRODUCTION AND MATHEMATICAL CONCEPTS

INTRODUCTION AND MATHEMATICAL CONCEPTS Capter 1 INTRODUCTION ND MTHEMTICL CONCEPTS PREVIEW Tis capter introduces you to te basic matematical tools for doing pysics. You will study units and converting between units, te trigonometric relationsips

More information

Linear-algebraic list decoding for variants of Reed-Solomon codes

Linear-algebraic list decoding for variants of Reed-Solomon codes Electronic Colloquium on Computational Complexity, Report No. 73 (2012) Linear-algebraic list decoding for variants of Reed-Solomon codes VENKATESAN GURUSWAMI CAROL WANG Computer Science Department Carnegie

More information