Hardness Amplification


1 Hardness Amplification

2 Synopsis. 1. Yao's XOR Lemma 2. Error Correcting Code 3. Local Decoding 4. Hardness Amplification Using Local Decoding 5. List Decoding 6. Local List Decoding 7. Hardness Amplification Using Local List Decoding. Computational Complexity, by Y. Fu. Hardness Amplification, 1 / 90.

3 Yao's XOR Lemma

4 Andrew Chi-Chih (Qizhi) Yao. Theory and Applications of Trapdoor Functions. FOCS, 1982.

5 For every boolean function, one of the two trivial (constant) circuits is correct on at least half the inputs, or equivalently wrong on at most half the inputs. The hardness of a boolean function is measured by the number of gates necessary to calculate the function. For $\rho \in (\frac{1}{2}, 1]$, the $\rho$-average case hardness of $f$ is the number of gates necessary to calculate $f$ with $\rho$-correctness. The average case hardness of $f$ is the number $S$ of gates necessary to calculate $f$ with $(\frac{1}{2} + \frac{1}{S})$-correctness.

6 Average Case Hardness. Suppose $f : \{0,1\}^n \to \{0,1\}$ and let $\rho \in (\frac{1}{2}, 1]$. 1. The $\rho$-average case hardness of $f$, notation $H^{\rho}_{avg}(f)$, is the largest $S$ such that for every circuit $C$ with $|C| \le S$, one has $\Pr_{x \in_R \{0,1\}^n}[C(x) = f(x)] < \rho$. 2. The average case hardness of $f$, notation $H_{avg}(f)$, is $\max\{S \mid H^{\frac{1}{2}+\frac{1}{S}}_{avg}(f) \ge S\} = \min\{S \mid H^{\frac{1}{2}+\frac{1}{S}}_{avg}(f) \le S\}$. In other words, $H_{avg}(f)$ is the unique fixpoint of $S = H^{\frac{1}{2}+\frac{1}{S}}_{avg}(f)$. For $f : \{0,1\}^* \to \{0,1\}$, let $H^{\rho}_{avg}(f)(n)$ be $H^{\rho}_{avg}$ of $f$ restricted to $\{0,1\}^n$.

7 Worst Case Hardness. Intuitively the worst case hardness of $f$ is the minimal number of gates necessary to calculate $f$. Technically $H_{wrs}(f)$ is defined to be $H^{1}_{avg}(f)$.

8 A boolean function is mildly hard if every moderate-size circuit is wrong on a non-negligible fraction of the inputs. A boolean function is strongly hard if every moderate-size circuit is wrong on almost half the inputs.

9 Yao's XOR Lemma. Theorem (Yao, 1982). For every $f : \{0,1\}^n \to \{0,1\}$, $0 < \delta < \frac{1}{2}$ and $k \in \mathbb{N}$, if $\epsilon > 2(1-\delta)^k$ then $H^{\frac{1}{2}+\epsilon}_{avg}(f^{\oplus k}) \ge \frac{\epsilon^2}{400n} H^{1-\delta}_{avg}(f)$, where $f^{\oplus k} : \{0,1\}^{nk} \to \{0,1\}$ is defined by $f^{\oplus k}(x_1, \ldots, x_k) = \bigoplus_{i=1}^{k} f(x_i) = \sum_{i=1}^{k} f(x_i) \pmod 2$. Computing $f^{\oplus k}$ is as easy as computing $f$. If $f$ is mildly hard, then $f^{\oplus k}$ is strongly hard.
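The XOR construction itself is straightforward to run; a minimal sketch (the inner function used in the demo is an arbitrary stand-in, not anything assumed by the lemma):

```python
def xor_amplify(f, n, k):
    """Build f^{xor k}: split an (n*k)-bit input into k blocks of n bits
    and XOR the values of f on the blocks."""
    def fk(x):
        assert len(x) == n * k
        return sum(f(x[i * n:(i + 1) * n]) for i in range(k)) % 2
    return fk

# Stand-in inner function for the demo: parity of a block.
f = lambda block: sum(block) % 2
fk = xor_amplify(f, n=3, k=4)
```

The point of the lemma is that even a weak advantage in predicting `fk` would translate back into a circuit computing `f` too well.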

10 Yao's XOR Lemma is typical of Direct Product Theorems. If a problem is hard to solve, then solving multiple independent instances of the problem is even harder.

11 Yao's XOR Lemma can be proved using the Min-Max Theorem.

12 Von Neumann's 1928 paper laid the foundation of game theory. The main theorem of the paper is the Min-Max Theorem. "As far as I can see, there could be no theory of games... without that theorem."

13 Von Neumann's Min-Max Theorem. A zero-sum game is modeled by a payoff matrix $A = (a_{ij})$ of reals. A column player, or Minimizer, chooses an index $j \in [n]$. A row player, or Maximizer, chooses an index $i \in [m]$. In a mixed strategy, the column player, respectively the row player, chooses a distribution over columns, respectively rows. Von Neumann's Min-Max Theorem states that $\max_{r \in \Delta_m} \min_{c \in \Delta_n} r^{\top} A c = \min_{c \in \Delta_n} \max_{r \in \Delta_m} r^{\top} A c$, where $\Delta_m$, $\Delta_n$ are the sets of distributions over rows and columns. The problem is algebraic; von Neumann's proof is topological.

14 Minimizer chooses a mixed strategy $(c_1, \ldots, c_n)$. Its loss is $\sum_{j \in [n]} a_{ij} c_j$ if Maximizer plays pure strategy $i$. If Maximizer knows $(c_1, \ldots, c_n)$, it will choose $i$ such that $\sum_{j \in [n]} a_{ij} c_j$ is maximum. Minimizer wishes to choose the mixed strategy $(c_1, \ldots, c_n)$ such that $c_0 = \max_i \sum_{j \in [n]} a_{ij} c_j$ is minimum among all strategies. Maximizer chooses a mixed strategy $(r_1, \ldots, r_m)$. Its payoff is $\sum_{i \in [m]} r_i a_{ij}$ if Minimizer plays pure strategy $j$. If Minimizer knows $(r_1, \ldots, r_m)$, it will choose $j$ such that $\sum_{i \in [m]} r_i a_{ij}$ is minimum. Maximizer wishes to choose the mixed strategy $(r_1, \ldots, r_m)$ such that $r_0 = \min_j \sum_{i \in [m]} r_i a_{ij}$ is maximum among all strategies.

15 The Min-Max Theorem says that Minimizer has an optimal strategy $(c_1, \ldots, c_n)$ and Maximizer has an optimal strategy $(r_1, \ldots, r_m)$ such that $c_0 = r_0$, called the value of the game.
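A toy numerical illustration (not from the slides): a grid search over mixed strategies in a 2x2 zero-sum game shows max-min and min-max meeting at the value of the game. The payoff matrix below (matching pennies, value 0) is an assumed example:

```python
def mixed_value(A, p, q):
    """Expected payoff when the row player mixes (p, 1-p) and the
    column player mixes (q, 1-q) in the 2x2 zero-sum game A."""
    return sum(A[i][j] * pi * qj
               for i, pi in enumerate((p, 1 - p))
               for j, qj in enumerate((q, 1 - q)))

def grid_minmax(A, steps=100):
    """Approximate max_r min_c and min_c max_r over a strategy grid."""
    grid = [i / steps for i in range(steps + 1)]
    maxmin = max(min(mixed_value(A, p, q) for q in grid) for p in grid)
    minmax = min(max(mixed_value(A, p, q) for p in grid) for q in grid)
    return maxmin, minmax

pennies = [[1, -1], [-1, 1]]   # assumed example game, value 0
```

Both quantities land on the same value, as the theorem guarantees; in general only max-min ≤ min-max is immediate.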

16 Impagliazzo's Hardcore Lemma. Suppose $0 < \delta < 1$. A distribution $H$ over $\{0,1\}^n$ has density $\delta$ if $\Pr[H = x] \le \frac{1}{\delta 2^n}$ for every $x \in \{0,1\}^n$. R. Impagliazzo. Hard-Core Distributions for Somewhat Hard Problems. FOCS, 1995.

17 Impagliazzo's Hardcore Lemma. Lemma (Impagliazzo, 1995). Suppose $\delta \in (0,1)$, $\epsilon > 0$ and $f : \{0,1\}^n \to \{0,1\}$. If $H^{1-\delta}_{avg}(f) \ge S$ then there exists a density-$\delta$ distribution $H$ such that for every circuit $C$ of size at most $\frac{\epsilon^2}{100n} S$, $\Pr_{x \in_R H}[C(x) = f(x)] \le \frac{1}{2} + \epsilon$. Every hard boolean function has a fraction of the input set, the hardcore set, on which the function is extremely hard.

18 The lemma can be interpreted in terms of a zero-sum game. 1. Minimizer's pure strategy is a density-$\delta$ distribution $H$, which is a mixed strategy over flat distributions. 2. Maximizer's pure strategy is a circuit $C$ of size $\frac{\epsilon^2}{100n} S$. The value of the game is $\Pr_{x \in_R H}[C(x) = f(x)]$. A distribution over $\{0,1\}^n$ is $K$-flat if it is the uniform distribution over a subset of $\{0,1\}^n$ of size $K$. Maximizer's mixed strategy is a distribution $\mathcal{C}$ over circuits of size bounded by $\frac{\epsilon^2}{100n} S$.

19 If $H^{1-\delta}_{avg}(f) \ge S$ then there exists a density-$\delta$ distribution $H$ such that for every circuit $C$ of size at most $\frac{\epsilon^2}{100n} S$, $\Pr_{x \in_R H}[C(x) = f(x)] \le \frac{1}{2} + \epsilon$. Impagliazzo's Hardcore Lemma says that Minimizer has an optimal strategy $H$, using which its loss is bounded by $\frac{1}{2} + \epsilon$. I.e., $\min_{H} \max_{\mathcal{C}} \Pr_{C \in_R \mathcal{C},\, x \in_R H}[C(x) = f(x)] \le \frac{1}{2} + \epsilon$.

20 According to von Neumann's Min-Max Theorem, it suffices to prove that for every mixed strategy $\mathcal{C}$ of Maximizer, $\Pr_{C \in_R \mathcal{C},\, x \in_R H}[C(x) = f(x)] \le \frac{1}{2} + \epsilon$ holds for some density-$\delta$ strategy $H$.

21 Towards a contradiction, assume that for the mixed strategy $\mathcal{C}$, $\Pr_{C \in_R \mathcal{C},\, x \in_R H}[C(x) = f(x)] > \frac{1}{2} + \epsilon$ (1) holds for every density-$\delta$ distribution $H$. Call $x \in \{0,1\}^n$ bad if $\Pr_{C \in_R \mathcal{C}}[C(x) = f(x)] \le \frac{1}{2} + \epsilon$, and good otherwise. There are $< \delta 2^n$ bad $x$'s; otherwise we could let $H$ be the uniform distribution on the bad $x$'s, which would falsify (1).

22 Construction of $C$. 1. Pick $C_1, \ldots, C_t \in_R \mathcal{C}$ independently. 2. For each $x$, let $C(x)$ be the majority of $C_1(x), \ldots, C_t(x)$. Let $t = \frac{50n}{\epsilon^2}$. Consequently $|C| < S$. By the Chernoff bound, $\Pr_{C_1, \ldots, C_t \in_R \mathcal{C}}[C(x) \ne f(x)] < 2^{-n}$ for every good $x$. By the union bound, there exists a $C$ such that $C(x) = f(x)$ for all good $x$'s. $\Pr_{x \in_R U_n}[C(x) = f(x)] > 1 - \delta$ since there are $< \delta 2^n$ bad $x$'s. This contradicts the assumption $H^{1-\delta}_{avg}(f) \ge S$.
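The majority vote in Step 2 is the standard amplification trick; a quick empirical sketch of why it works (the success probability 0.75 and the trial counts below are arbitrary assumptions of the demo):

```python
import random

def majority_success_rate(p_correct, t, trials=4000, seed=1):
    """Estimate the probability that a majority vote over t independent
    predictors, each correct with probability p_correct, is correct."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes = sum(rng.random() < p_correct for _ in range(t))
        wins += votes > t / 2
    return wins / trials
```

As the Chernoff bound predicts, the failure probability of the vote drops exponentially in $t$.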

23 Proof of Yao's XOR Lemma. Suppose $f : \{0,1\}^n \to \{0,1\}$, $0 < \delta < \frac{1}{2}$ and $\epsilon > 2(1-\delta)^k$. Assume towards a contradiction that $H^{1-\delta}_{avg}(f) \ge S$ and $C$ is a circuit of size $S' = \frac{\epsilon^2}{400n} S$ such that $\Pr_{(x_1, \ldots, x_k) \in_R U_n^k}[C(x_1, \ldots, x_k) = \bigoplus_{i=1}^{k} f(x_i)] \ge \frac{1}{2} + \epsilon$. (2)

24 Proof of Yao's XOR Lemma. Let $H$ be the density-$\delta$ distribution of Impagliazzo's Lemma. Impagliazzo's proof of Yao's Lemma makes use of the observation that $U_n$ can be generated from a density-$\delta$ distribution. Toss a biased coin whose Head comes up with probability $\delta$. If the coin comes up Head, then pick $x \in_R H$. If the coin comes up Tail, then pick $x \in_R G$, where $G$ is defined by $\Pr[G = x] = \frac{2^{-n} - \delta \Pr[H = x]}{1 - \delta}$. We can now write $U_n = (1-\delta)G + \delta H$.

25 Proof of Yao's XOR Lemma. Let $(U_n)^k$ be the distribution of picking $k$ strings from $\{0,1\}^n$ independently and uniformly. Using the previous notation, $(U_n)^k = (1-\delta)^k G^k + (1-\delta)^{k-1}\delta\, G^{k-1}H + \cdots + \delta^k H^k$. Let $P_{G^{k-1}H}$, for example, denote $\Pr_{(x_1, \ldots, x_k) \in_R G^{k-1}H}[C(x_1, \ldots, x_k) = \bigoplus_{i=1}^{k} f(x_i)]$. It follows from (2) that $\frac{1}{2} + \epsilon \le (1-\delta)^k P_{G^k} + (1-\delta)^{k-1}\delta P_{G^{k-1}H} + \cdots + \delta^k P_{H^k} < \frac{\epsilon}{2} + (1-\delta)^{k-1}\delta P_{G^{k-1}H} + \cdots + \delta^k P_{H^k}$, since $(1-\delta)^k < \frac{\epsilon}{2}$.

26 Proof of Yao's XOR Lemma. By the averaging principle we may assume wlog that $\frac{1}{2} + \frac{\epsilon}{2} < P_{G^{k-1}H}$, i.e., $\Pr_{x_1, \ldots, x_{k-1} \in_R G,\, x_k \in_R H}[C(x_1, \ldots, x_k) = \bigoplus_{i=1}^{k} f(x_i)] > \frac{1}{2} + \frac{\epsilon}{2}$. Using the averaging principle again, some $x_1, \ldots, x_{k-1}$ exist such that $\Pr_{x_k \in_R H}[C(x_1, \ldots, x_k) = \bigoplus_{i=1}^{k} f(x_i)] > \frac{1}{2} + \frac{\epsilon}{2}$. Since $\bigoplus_{i=1}^{k-1} f(x_i)$ is now a fixed bit, this gives an $S'$-sized circuit calculating $f$ on inputs chosen according to $H$ with probability better than $\frac{1}{2} + \frac{\epsilon}{2}$. This contradicts the assumption that $H$ is a hardcore (applied with parameter $\frac{\epsilon}{2}$, whence the size bound $S' = \frac{(\epsilon/2)^2}{100n} S$).

27 Error Correcting Code

28 Shannon's 1948 paper laid the foundation of Information Theory. A Mathematical Theory of Communication. Bell System Technical Journal 27(3): 379-423, 1948.

29 As a major application of Shannon's theory, Coding Theory studies coding techniques for efficient and robust data transmission. Data compression (source coding). Error correction (channel coding). Richard Hamming pioneered the second field and invented the first error correcting code, now known as the Hamming Code. Error Detecting and Error Correcting Codes. Bell System Technical Journal 29(2): 147-160, 1950.

30 By introducing redundancy, an error correcting code allows the receiver to correct errors without retransmission. Technically, an error-correcting code maps two distinct strings to two very different strings. It amplifies message differences.

31 The fractional Hamming distance $\Delta(x, y)$ of $x, y \in \{0,1\}^m$ is defined by $\Delta(x, y) = \frac{1}{m}|\{i \mid x_i \ne y_i\}|$. A function $E : \{0,1\}^n \to \{0,1\}^m$ is an error-correcting code (ECC) with distance $\delta \in [0,1]$ if $\Delta(E(x), E(y)) \ge \delta$ whenever $x \ne y$. $\{E(x) \mid x \in \{0,1\}^n\}$ is the set of codewords of $E$. If a code has distance $\delta$, then a codeword with less than $\frac{\delta}{2}$ of its coordinates corrupted can be uniquely recovered. We shall always understand $\frac{\delta}{2}$ as rounded to a whole number of corrupted coordinates.
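The definition translates directly into a small helper (strings or bit sequences as Python sequences):

```python
def fractional_hamming(x, y):
    """Fractional Hamming distance: the fraction of positions where
    two equal-length strings differ."""
    assert len(x) == len(y) and len(x) > 0
    return sum(a != b for a, b in zip(x, y)) / len(x)
```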

32 Explicit Code. We need explicit functions $E : \{0,1\}^n \to \{0,1\}^m$ that can be encoded and decoded efficiently. There is a poly($m$) algorithm to compute $E$. There is a poly($m$) algorithm to compute $x$ from $y$ such that $\Delta(y, E(x)) < \frac{\delta}{2}$.

33 Walsh-Hadamard Code. The Walsh-Hadamard function $WH : \{0,1\}^n \to \{0,1\}^{2^n}$ encodes a string of length $n$ by a linear function in $n$ variables over GF(2): $WH(u) : x \mapsto u \odot x$, where $u \odot x = \sum_{i=1}^{n} u_i x_i \pmod 2$. The Walsh-Hadamard Code is a binary linear block code with distance $\frac{1}{2}$ and block length $2^n$, where $n$ is the message length. The rate of the code is $\frac{n}{2^n}$.
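A direct encoder (exponential-size output, exactly as the definition demands):

```python
from itertools import product

def wh_encode(u):
    """Walsh-Hadamard: list the GF(2) inner product <u, x> for every
    x in {0,1}^n; the codeword has length 2^n."""
    n = len(u)
    return [sum(ui & xi for ui, xi in zip(u, x)) % 2
            for x in product((0, 1), repeat=n)]
```

Any two distinct codewords differ in exactly half their positions, matching the stated distance $\frac{1}{2}$.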

34 Linear block codes exploit the algebraic properties of finite fields. They can be decoded in time polynomial in their block length. The Reed-Solomon Code defined next is a non-binary linear block code.

35 Reed-Solomon Code. The idea is to interpret a length-$n$ message as the coefficients of a degree-$(n-1)$ polynomial $P(x)$ over a finite field $F$ and create a length-$m$ codeword by over-sampling $P(x)$ at $m$ distinct points. The message is recovered from the codeword by interpolation. Let $F$ be a finite field and $n \le m \le |F|$. The Reed-Solomon Code $RS : F^n \to F^m$ maps $a_1 \ldots a_n$ onto $b_1 \ldots b_m$, where $b_j = A(f_j)$, $A(x) = a_1 + a_2 x + \cdots + a_n x^{n-1}$, and $f_1, \ldots, f_m \in F$ are pairwise distinct. The code is $q$-ary if $F = GF(q)$. Lemma. The Reed-Solomon Code $RS : F^n \to F^m$ has distance $1 - \frac{n-1}{m}$.
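An encoding sketch over a small prime field (the prime 97 and the evaluation points are assumptions of the demo):

```python
P = 97  # demo field GF(97)

def rs_encode(msg, points):
    """Reed-Solomon: view msg as coefficients a_1..a_n of A(x) and
    evaluate A at m distinct field points (Horner's rule mod P)."""
    def A(x):
        acc = 0
        for c in reversed(msg):
            acc = (acc * x + c) % P
        return acc
    return [A(f) for f in points]
```

Two distinct polynomials of degree $< n$ agree on at most $n-1$ points, which is exactly the stated distance $1 - \frac{n-1}{m}$.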

36 Reed-Solomon Code. The Reed-Solomon Code is the linear transformation defined by the generator matrix, in fact the transpose of the Vandermonde matrix: $\begin{pmatrix} 1 & f_1 & \cdots & f_1^{n-2} & f_1^{n-1} \\ \vdots & & & & \vdots \\ 1 & f_i & \cdots & f_i^{n-2} & f_i^{n-1} \\ \vdots & & & & \vdots \\ 1 & f_m & \cdots & f_m^{n-2} & f_m^{n-1} \end{pmatrix} \begin{pmatrix} a_1 \\ \vdots \\ a_i \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_j \\ \vdots \\ b_m \end{pmatrix}$

37 Decoding Reed-Solomon Code. Theorem (Berlekamp-Welch, 1986). There is a P-time algorithm that, given a list $\{(f_j, b_j)\}_{j \in [m]} \subseteq F \times F$, outputs the polynomial $G$ of degree $< n$ such that $G(f_i) = b_i$ for all $i \in I$, for some $I \subseteq [m]$ with $|I| \ge \frac{m+n+1}{2}$, whenever such a $G$ exists. The Reed-Solomon Code is an ECC with distance $\frac{m-n+1}{m}$. If the number of corrupted coordinates is $\rho m \le \frac{1}{2} \cdot \frac{m-n+1}{m} \cdot m = \frac{m-n+1}{2}$, then there is an efficient decoder.

38 Decoding Reed-Solomon Code. We look for an $(n-1)$-degree polynomial $G(x)$ such that $G(f_i) = b_i$ holds for a subset $I$ of $[m]$ of size $|I| \ge \frac{m+n+1}{2}$. Let $E(x)$ be the degree-$\frac{m-n-1}{2}$ non-zero error locator polynomial such that $G(f_i)E(f_i) = b_i E(f_i)$ (3) holds for all $i \in [m]$. Intuitively, $E(f_i) = 0$ if $G(f_i) \ne b_i$. We linearize the left-hand side of (3) by a degree-$\frac{m+n-3}{2}$ polynomial $C(x)$ with $C(f_i) = b_i E(f_i)$, producing $m$ linear equations in at most $m$ variables. $C(x)$ and $E(x)$ are solved for. We get $G(x)$ by solving the equations $\{G(f_i) = b_i\}_{i \in [m],\, E(f_i) \ne 0}$.
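The whole procedure fits in a short script; this is a sketch, assuming the field GF(97) and implementing exactly the linearization above (solve for $C$ and a monic $E$, then divide, which works whenever the error count is small enough):

```python
P = 97  # demo field GF(97)

def solve_mod(A, b):
    """Gaussian elimination over GF(P); returns one solution of Ax = b."""
    m, n = len(A), len(A[0])
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    piv_cols, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], P - 2, P)
        M[r] = [v * inv % P for v in M[r]]
        for i in range(m):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(v - f * w) % P for v, w in zip(M[i], M[r])]
        piv_cols.append(c)
        r += 1
    x = [0] * n                    # free variables set to 0
    for i, c in enumerate(piv_cols):
        x[c] = M[i][-1]
    return x

def poly_div(num, den):
    """Polynomial division over GF(P); returns the quotient."""
    num = num[:]
    q = [0] * max(len(num) - len(den) + 1, 1)
    inv = pow(den[-1], P - 2, P)
    for i in range(len(num) - len(den), -1, -1):
        q[i] = num[i + len(den) - 1] * inv % P
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * d) % P
    return q

def berlekamp_welch(pairs, n, e):
    """Recover the degree-<n message polynomial from m >= n + 2e
    pairs (f_i, b_i), at most e of which are corrupted."""
    A, rhs = [], []
    for f, b in pairs:
        row = [pow(f, j, P) for j in range(n + e)]           # coefficients of C
        row += [(-b * pow(f, j, P)) % P for j in range(e)]   # coefficients of E
        A.append(row)
        rhs.append(b * pow(f, e, P) % P)
    sol = solve_mod(A, rhs)
    C = sol[:n + e]
    E = sol[n + e:] + [1]          # monic error locator
    return poly_div(C, E)[:n]      # C = G * E, so the quotient is G
```

Any solution of the linear system satisfies $C = G \cdot E$ as polynomials (the difference vanishes on all correct points, which outnumber its degree), so the division recovers $G$ exactly.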

39 Reed-Muller Code. Let $F$ be a finite field, and let $l, d$ be numbers such that $d < |F|$. The Reed-Muller Code $RM : F^{\binom{l+d}{d}} \to F^{|F|^l}$ maps $P(x_1, \ldots, x_l) = \sum_{i_1 + \cdots + i_l \le d} c_{i_1, \ldots, i_l} x_1^{i_1} \cdots x_l^{i_l}$, an $l$-variate polynomial over $F$ of degree $d$, onto the string $(P(x_1, \ldots, x_l))_{x_1, \ldots, x_l \in F}$. The number of ways of buying exactly $d$, respectively at most $d$, items from a shop selling $l$ kinds of items is $\binom{l+d-1}{d}$, respectively $\binom{l+d}{d}$. RM has distance $1 - \frac{d}{|F|}$ by the Schwartz-Zippel Lemma. RM is a generalization of WH ($F = \{0,1\}$, $d = 1$) and RS ($l = 1$).
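An encoding sketch (the field GF(5) and the example polynomial below are assumptions of the demo; coefficients are given as a map from exponent tuples to field elements):

```python
from itertools import product

P = 5  # demo field GF(5)

def rm_encode(coeffs, l):
    """Reed-Muller: coeffs maps exponent tuples (i_1,...,i_l) to field
    elements; the codeword lists the polynomial's value at all |F|^l points."""
    def eval_at(x):
        total = 0
        for exps, c in coeffs.items():
            term = c
            for xi, e in zip(x, exps):
                term = term * pow(xi, e, P) % P
            total = (total + term) % P
        return total
    return [eval_at(x) for x in product(range(P), repeat=l)]
```

For $P(x, y) = x + y$ (degree 1) the codeword is nonzero on a $1 - \frac{1}{5} = \frac{4}{5}$ fraction of the points, matching the Schwartz-Zippel distance bound.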

40 Concatenated Code. Dave Forney (1967) pointed out that one can combine two codes to get one gaining the virtues of both. Suppose $E_1 : \{0,1\}^n \to \Sigma^m$ and $E_2 : \Sigma \to \{0,1\}^k$ are ECCs with distance $\delta_1$ and $\delta_2$ respectively. The concatenated code $E_2 \circ E_1 : \{0,1\}^n \to \{0,1\}^{mk}$, with distance $\delta_1 \delta_2$, maps $x$ onto $E_2(E_1(x)_1) \ldots E_2(E_1(x)_m)$. $E_1$ is the outer code, and $E_2$ is the inner code.
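Concatenation itself is one line; the toy 3-fold repetition codes below are stand-ins for $E_1$ and $E_2$, not the codes used later:

```python
def concat_encode(outer, inner, x):
    """E2 . E1: encode x with the outer code, then encode each outer
    symbol with the inner code and splice the blocks together."""
    return [bit for symbol in outer(x) for bit in inner(symbol)]

# Toy stand-ins: both codes are 3-fold repetition.
rep3_outer = lambda x: [b for b in x for _ in range(3)]   # symbols = bits
rep3_inner = lambda s: [int(s)] * 3
```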

41 Concatenated Code. Consider $RS : \{0,1\}^{n \log|F|} = F^n \to F^m$ and $WH : F \to \{0,1\}^{|F|}$. $WH \circ RS(x) = WH(RS(x)_1) \ldots WH(RS(x)_m)$.

42 Decoding Concatenated Code. Suppose $E_1 : \{0,1\}^n \to \Sigma^m$ and $E_2 : \Sigma \to \{0,1\}^k$ are ECCs with distance $\delta_1$ and $\delta_2$ respectively. A decoder for $E_2 \circ E_1 : \{0,1\}^n \to \{0,1\}^{mk}$ that unique-decodes the inner blocks and then the outer word can handle $\frac{\delta_1}{2} \cdot \frac{\delta_2}{2}$ errors.

43 Local Decoding

44 How do we apply the ECC technique to study hardness amplification?

45 A function $f : \{0,1\}^n \to \{0,1\}$ is identified with a string in $\{0,1\}^{2^n}$. Let $E : \{0,1\}^{2^n} \to \{0,1\}^{2^{cn}}$ be an ECC with distance $\delta$. Both $E(f)$ and a circuit $D$ approximating $E(f)$ are functions of type $\{0,1\}^{cn} \to \{0,1\}$. If $\Delta(D, E(f)) < \frac{\delta}{2}$, then $f$ can be recovered from $D$. In other words, if $f$ is hard to compute in the worst case, then $E(f)$ is hard to compute in the average case. Important: we do not have to output $E(f)$, which is too long. We only need to be able to compute $f$, which amounts to calculating the single bit $f(x)$ given $x$.

46 A locally decodable code is an error-correcting code that allows a single bit of the original word to be recovered with high probability by querying only a small number of bits of a corrupted codeword.

47 Local Decoder. Let $E : \{0,1\}^n \to \{0,1\}^m$ be an ECC, $q \in \mathbb{N}$ and $\rho \in (0,1)$. A local decoder for $E$ of $q$ queries handling $\rho$ errors is a PTM that, given some $j \in [n]$ and random access to $y \in \{0,1\}^m$ such that $\Delta(y, E(x)) < \rho$ for some $x \in \{0,1\}^n$, makes $q$ random accesses to $y$ and outputs $x_j$ in polylog($m$) time with probability $\ge \frac{2}{3}$. By random access to $y$ we mean that for each $i \in [m]$ the bit $y_i$ can be fetched in polylog($m$) time.

48 Local Decoder for Walsh-Hadamard. Theorem. For every $\rho < \frac{1}{4}$ the Walsh-Hadamard code has a local decoder handling $\rho$ errors. The following algorithm makes two random queries to the corrupted codeword $f$: Input: $j \in [n]$ and random access to $f \in \{0,1\}^{2^n}$ such that $\Pr_y[f(y) \ne x \odot y] \le \rho < \frac{1}{4}$ for some $x \in \{0,1\}^n$. Output: $x_j$ with probability $\ge 1 - 2\rho$. 1. Choose $y \in_R \{0,1\}^n$. 2. Output $f(y) + f(y + e_j)$. By the union bound, the following equalities hold with probability $\ge 1 - 2\rho$: $f(y) + f(y + e_j) = x \odot y + x \odot (y + e_j) = x \odot e_j = x_j$.
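The two-query decoder, with the corrupted codeword modeled as a query function (the example codeword in the usage below is an assumption of the demo):

```python
import random

def wh_local_decode(query, n, j, rng=random):
    """Two-query local decoder for Walsh-Hadamard: return
    query(y) XOR query(y + e_j) for a random y; this equals x_j
    whenever both queried positions are uncorrupted."""
    y = [rng.randrange(2) for _ in range(n)]
    y_flipped = y[:]
    y_flipped[j] ^= 1                 # y + e_j over GF(2)^n
    return query(y) ^ query(y_flipped)
```

Each query point is individually uniform, so by the union bound both are uncorrupted with probability at least $1 - 2\rho$.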

49 Local Decoder for Reed-Muller. Theorem. Suppose $F$ is a finite field and $l, d$ are numbers. There is a poly($|F|, l, d$) time local decoder for the Reed-Muller ECC handling $(1 - \frac{d}{|F|})/6$ errors. An $l$-variable degree-$d$ polynomial $P$ is (represented by) a list of values on its first $\binom{l+d}{d}$ inputs. $RM(P)$ is a list of values on all $|F|^l$ inputs. We prove that given $x \in F^l$ and a corrupted version of $RM(P)$, we can efficiently recover $P(x)$ with high probability.

50 Input: $x \in F^l$ and random access to $f \in F^{F^l}$ such that $\Pr_{z \in F^l}[f(z) \ne P(z)] < \rho = (1 - \frac{d}{|F|})/6$. Output: $P(x)$ with probability $\ge 2/3$. 1. Choose $r \in_R F^l$. 2. Query $f$ on the elements of the random line $\{x + tr \mid t \in F\}$. 3. Run the Reed-Solomon decoder to obtain $Q(v) = P(x + vr)$. 4. Output $Q(0)$. $f$ and $P$ differ in $< \rho|F|^l$ elements of $F^l$. By the Markov inequality, the probability that $f$ and $P$ differ in at least $3\rho|F| = (1 - \frac{d}{|F|})|F|/2$ points of the line is $< \frac{1}{3}$. With probability $\ge \frac{2}{3}$, $f$ and $P$ differ in less than a $(1 - \frac{d}{|F|})/2$ fraction of the points of the line. Notice that $1 - \frac{d}{|F|}$ is the distance of the Reed-Solomon code decoded in Step 3.

51 Local Decoder for Concatenated Code. Theorem. Let $E_1 : \{0,1\}^n \to \Sigma^m$ and $E_2 : \Sigma \to \{0,1\}^k$ be ECCs with local decoders of $q_1$, resp. $q_2$ queries, handling $\rho_1$, resp. $\rho_2$ errors. Then there is a local decoder of $O(q_1 q_2 \log(q_1) \log|\Sigma|)$ queries handling $\rho_1 \rho_2$ errors for the concatenated code $E_2 \circ E_1$. Less than a $\rho_1$ fraction of the length-$k$ blocks in $y$ can be of distance $> \rho_2$ from the corresponding blocks in $E_2(E_1(x))$.

52 Hardness Amplification Using Local Decoding

53 A local decoder is a randomized algorithm. We need to derandomize it in order to talk about worst case hardness.

54 BPP $\subseteq$ P/poly. If a function on $\{0,1\}^n$ can be computed in P-time probabilistically, it can be computed by a P-size circuit exactly.

55 Worst Case Hardness $\Rightarrow$ Mild Hardness. Theorem. Let $S : \mathbb{N} \to \mathbb{N}$, $f \in \mathsf{E}$ and $H_{wrs}(f)(n) \ge S(n)$ for all $n$. There are $g \in \mathsf{E}$ and $c > 0$ such that $H^{0.99}_{avg}(g)(n) \ge S(n/c)/n^c$ for every sufficiently large $n$. A function $f : \{0,1\}^n \to \{0,1\}$ is seen as a string in $\{0,1\}^{2^n}$. Let $E : \{0,1\}^N \to \{0,1\}^{N^c}$ be an ECC, where $N = 2^n$. Let $g(x) = E(f)(x)$ for $x \in \{0,1\}^{cn}$ (note $2^{cn} = N^c$). For $g$ to satisfy the property of the theorem, we need the following: There is a local decoder for $E$ that uses polylog($N^c$) running time and queries and can handle a 0.01 fraction of errors. $E$ is computable in time $2^{O(n)}$, so that $g \in \mathsf{E} = DTIME(2^{O(n)})$.

56 Worst Case Hardness $\Rightarrow$ Mild Hardness. Suppose $g : \{0,1\}^{cn} \to \{0,1\}$ were computed by a circuit $D$ of size $< S(n)/(cn)^c$ that is correct on 99% of the inputs. By definition a local decoder for $E$ calculates $f : \{0,1\}^n \to \{0,1\}$ in poly($n$) time with probability $\ge \frac{2}{3}$. The local decoder queries $D$ poly($n$) times. By BPP $\subseteq$ P/poly the decoder can be implemented by a circuit of poly($n$) size that calculates $f$ exactly. So the local decoding can be done by a circuit of size $< S(n)$. Conclude that $f$ could be calculated by a circuit of size $< S(n)$, contradicting the assumption.

57 Worst Case Hardness $\Rightarrow$ Mild Hardness. $E = WH \circ RM$. 1. The Reed-Muller code RM is given by the following parameters: $|F| = \log^5 N$, $l = \log N / \log\log N$, $d = \log^2 N$. Our RM is of type $F^{\binom{l+d}{d}} \to F^{|F|^l}$. Since $\binom{l+d}{d} > N$, by padding we may restrict RM to $\{0,1\}^N$. $F^{|F|^l} = (\{0,1\}^{\log|F|})^{|F|^l} = (\{0,1\}^{5\log\log N})^{N^5}$. 2. The Walsh-Hadamard code WH is of type $F = \{0,1\}^{\log|F|} = \{0,1\}^{5\log\log N} \to \{0,1\}^{\log^5 N} = \{0,1\}^{|F|}$. Hence $E : \{0,1\}^N \to \{0,1\}^{N^6}$.

58 By combining with Yao's XOR Lemma, we can promote worst case hardness to strong (average case) hardness.

59 Theorem. Let $S : \mathbb{N} \to \mathbb{N}$ be monotone and time constructible. If there are $d \in (0,1)$ and $f \in \mathsf{E}$ such that $H_{wrs}(f)(n) \ge S(n)$, then there exists $\hat{f} \in \mathsf{E}$ such that $H_{avg}(\hat{f})(n) \ge S(\sqrt{n})^d$. By the previous theorem, $H^{0.99}_{avg}(g)(n) \ge S'(n) = S(n/c)/\mathrm{poly}(n)$ for some $g \in \mathsf{E}$ and every sufficiently large $n$. Consider $g^{\oplus k}$ where $k = c' \log S'(n)$ for a sufficiently small $c'$, so that $\epsilon = 2(1-\delta)^k = \frac{2}{S'(n)^{d'}}$ for some $d'$ determined by $c'$ (here $\delta = 0.01$). By Yao's Lemma, $H^{\frac{1}{2}+\epsilon}_{avg}(g^{\oplus k}) \ge \frac{\epsilon^2}{400n} H^{0.99}_{avg}(g) \ge \frac{1}{100n\, S'(n)^{2d'}} S'(n)$. It follows that $H_{avg}(g^{\oplus k})(n) \ge S'(n)^d \ge S(\sqrt{n})^d$ for a suitable $d$ and large $n$. Yao's Lemma: if $\epsilon > 2(1-\delta)^k$ then $H^{\frac{1}{2}+\epsilon}_{avg}(h^{\oplus k}) \ge \frac{\epsilon^2}{400n} H^{1-\delta}_{avg}(h)$.

60 Can strong hardness amplification be achieved directly using ECCs?

61 List Decoding

62 How do we handle errors beyond half the distance? There may be many codewords falling into a ball of such a radius around the received word. The Johnson Bound points out, however, that the number of such codewords cannot be too large. Sometimes in applications we can afford the time to compute all candidate codewords.

63 We can then single out the original message from a list of candidate messages using additional information. For example, we can pin down a polynomial from a finite number of polynomials using a pair $(x_0, y_0)$.

64 Johnson Bound. Theorem (Johnson, 1962). If $E : \{0,1\}^n \to \{0,1\}^m$ is an ECC with distance at least $\frac{1}{2} - \epsilon$, then for every $y \in \{0,1\}^m$ and every $\delta \ge \sqrt{\epsilon}$, there are at most $\frac{1}{2\delta^2}$ codewords $y_1, \ldots, y_l$ such that $\Delta(y, y_i) \le \frac{1}{2} - \delta$ for every $i \in [l]$.

65 Suppose $y, y_1, \ldots, y_l$ satisfy the property of the theorem. Define $z_1, \ldots, z_l \in \mathbb{R}^m$ by $z_i(k) = 1$ if $y_i(k) = y(k)$, and $z_i(k) = -1$ otherwise. Since $\Delta(y, y_i) \le \frac{1}{2} - \delta$, $\sum_{k=1}^{m} z_i(k) \ge (\frac{1}{2} + \delta)m - (\frac{1}{2} - \delta)m = 2\delta m$. If $i \ne j$, then $\Delta(y_i, y_j) \ge \frac{1}{2} - \epsilon$ implies that $\langle z_i, z_j \rangle = \sum_{k=1}^{m} z_i(k) z_j(k) \le (\frac{1}{2} + \epsilon)m - (\frac{1}{2} - \epsilon)m = 2\epsilon m \le 2\delta^2 m$.

66 Set $w = \sum_{i=1}^{l} z_i$. Then $\sum_{k=1}^{m} w(k) = \sum_{k=1}^{m} \sum_{i=1}^{l} z_i(k) \ge 2\delta m l$, and $\langle w, w \rangle = \sum_{k=1}^{m} w(k)^2 = \sum_{i=1}^{l} \langle z_i, z_i \rangle + \sum_{i \ne j} \langle z_i, z_j \rangle \le ml + 2\delta^2 m l^2$. By Jensen's Inequality, $\left(\frac{1}{m} \sum_{k=1}^{m} w(k)\right)^2 \le \frac{1}{m} \sum_{k=1}^{m} w(k)^2 = \frac{1}{m} \langle w, w \rangle$. Now $l \le \frac{1}{2\delta^2}$ follows from $4\delta^2 l^2 \le l + 2\delta^2 l^2$. Jensen's Inequality: $f(E[X]) \le E[f(X)]$ for a random variable $X$ and convex function $f$. A function $f : \mathbb{R} \to \mathbb{R}$ is convex if for every $p \in [0,1]$ and all $x, y \in \mathbb{R}$, $f(px + (1-p)y) \le pf(x) + (1-p)f(y)$.

67 For explicit codes there are list decoding algorithms that can tolerate a corruption rate well above 50%.

68 Madhu Sudan provided the first list decoding algorithm for the Reed-Solomon Code. Decoding of Reed-Solomon Codes beyond the Error Correction Bound. FOCS, 1996.

69 List Decoding Reed-Solomon. Sudan's algorithm can handle a corruption rate up to $1 - 2\sqrt{n/m}$. This is surprising since $1 - 2\sqrt{n/m}$ can be very close to 1. Theorem (Sudan, 1996). There is a P-time algorithm that, given a set $\{(f_i, b_i)\}_{i=1}^{m} \subseteq F^2$, returns the set of all degree-$n$ polynomials $G$ such that $|\{i \mid G(f_i) = b_i\}| > 2\sqrt{mn}$. $f_1, \ldots, f_m$ are the distinct points of the code, and $b_1 \ldots b_m$ is the received word. The candidates are the $G(x)$'s guaranteed by the theorem. A list decoding algorithm runs in time polynomial in the block length.

70 Algorithm: Interpolation + Factorization. 1. Find a bivariate polynomial $Q(x, y)$ with degree $\le \sqrt{mn}$ in $x$ and $\le \sqrt{m/n}$ in $y$ such that $Q(f_i, b_i) = 0$ for all $i \in [m]$. The number of coefficients is $(\sqrt{mn} + 1)(\sqrt{m/n} + 1) > m$, so the $m$ linear equations have a nonzero solution. 2. Factorize $Q(x, y)$. 3. If $y - G(x)$ is a factor, check whether $G(x)$ has degree $\le n$ and $G(f_i) = b_i$ for $> 2\sqrt{mn}$ pairs. If so, output $G(x)$. 1. A. Lenstra, H. Lenstra, L. Lovász. Factoring Polynomials with Rational Coefficients. Mathematische Annalen, 1982. 2. E. Kaltofen. A Polynomial Reduction from Bivariate to Univariate Integer Polynomial Factorization. FOCS, 1982. 3. E. Kaltofen. A Polynomial Reduction from Multivariate to Bivariate Integer Polynomial Factorization. STOC, 1982.

71 Correctness. Fact. If a degree-$n$ polynomial $G(x)$ agrees with $\{(f_i, b_i)\}_{i=1}^{m}$ in more than $2\sqrt{mn}$ places, then $y - G(x)$ is a factor of $Q(x, y)$. 1. $Q(x, G(x))$ has degree $\le \sqrt{mn} + n\sqrt{m/n} = 2\sqrt{mn}$. 2. $Q(x, G(x)) = 0$ since $Q(x, G(x)) = 0$ has $> 2\sqrt{mn}$ solutions. 3. $Q(x, y) = (y - G(x))A(x, y) + R(x)$ for some $A(x, y), R(x)$. 4. $R(x) = 0$ by substituting $G(x)$ for $y$ in the equality. Conclude that $y - G(x)$ is a factor of $Q(x, y)$. Remark. There are no more than $\sqrt{m/n}$ candidate $G(x)$'s.

72 Local List Decoding

73 To make use of list decoding for hardness amplification, we need to provide local list decoding algorithms for the codes we use.

74 Let $E : \{0,1\}^n \to \{0,1\}^m$ be an ECC and let $\rho = 1 - \epsilon$ for $\epsilon > 0$. An algorithm $D$ is a local list decoder for $E$ handling $\rho$ errors if for every $x \in \{0,1\}^n$ and $y \in \{0,1\}^m$ satisfying $\Delta(E(x), y) \le \rho$, there exists a number $i_0 \in [\mathrm{poly}(n/\epsilon)]$ such that for every $j \in [n]$, on inputs $i_0, j$ and with random access to $y$, $D$ runs for $\mathrm{poly}(\log(m)/\epsilon)$ time and outputs $x_j$ with probability at least $\frac{2}{3}$. $y$ is the corrupted version of some codeword; $x$ is a candidate for the original message of $y$. $i_0$ is the piece of information that pins down $x$. $\mathrm{poly}(n/\epsilon)$ bounds the number of candidate original messages.

75 Local List Decoding of Walsh-Hadamard. Theorem. The Walsh-Hadamard Code has a local list decoder handling $\frac{1}{2} - \epsilon$ errors. This is essentially the proof of the Goldreich-Levin Theorem.

76 Local List Decoding of Reed-Muller. Theorem. The Reed-Muller Code has a local list decoder handling $1 - 10\sqrt{d/|F|}$ errors. Algorithm: $D$ runs in time poly($|F|, l, d$). 1. Condition: $f$ agrees with an $l$-variable degree-$d$ polynomial $P : F^l \to F$ on a $10\sqrt{d/|F|}$ fraction of the inputs. 2. Input: random access to $f : F^l \to F$, an index $i_0 \in [|F|^{l+1}]$, interpreted as a pair $(x_0, y_0) \in F^l \times F$, and $x \in F^l$. 3. Output: $D(i_0, x)$ such that $\Pr[D(i_0, x) = P(x)] \ge \frac{2}{3}$. $i_0 = (x_0, y_0)$ is the outside information that allows one to pin down the candidate polynomial $P$ satisfying $P(x_0) = y_0$.

77 Local List Decoding of Reed-Muller. A local list decoder outputs $P(x)$ with high probability for every $x$. The algorithm we describe next only outputs $P(x)$ with high probability for most $x$. This is sufficient, since we can then apply the local decoder for the Reed-Muller Code to get a proper local list decoder.

78 Algorithm. A Reed-Muller local list decoder handling $\rho \le 1 - 10\sqrt{d/|F|}$ errors. 1. Condition: $f$ agrees with an $l$-variable degree-$d$ polynomial $P : F^l \to F$ on a $10\sqrt{d/|F|}$ fraction of the inputs, $|F| > d^4$ and $d$ is large enough. 2. Input: random access to $f : F^l \to F$, an index $i_0 \in [|F|^{l+1}]$, interpreted as a pair $(x_0, y_0) \in F^l \times F$, and $x \in F^l$. 3. Output: $y$ such that $\Pr_{\mathrm{coins},\, x \in_R F^l}[y = P(x)] \ge 0.9$.

79 Algorithm. The idea is to use Sudan's list decoder for the Reed-Solomon Code. Input: $x$, corrupted $f$, and $(x_0, y_0)$. Output: $P(x)$ with high probability. 1. Transfer the multivariate polynomial $P$ to the univariate polynomial $G(z) = P(q(z))$ such that $G(0) = P(x)$; 2. Recover $G(z)$ from $f(q(z))$; 3. Output $G(0)$. Condition: $\exists r.\, q(r) = x_0$, i.e. $(r, y_0)$ is an index for $G(z)$.

80 Algorithm. Step 1. Choose $r \in_R F$. Step 2. Pick a random degree-3 curve $q : F \to F^l$, $q(z) = az^3 + bz^2 + cz + x$ with $a, b, c \in F^l$, such that $q(0) = x$ and $q(r) = x_0$. Set $L_{x,x_0} = \{q(t) \mid t \in F\}$. Step 3. Query $f$ on $L_{x,x_0}$ to get $S_{x,x_0} = \{(t, f(q(t))) \mid t \in F\}$. Step 4. Run Sudan's algorithm to get a list $G_1, \ldots, G_k$ of degree-$3d$ polynomials that agree with $S_{x,x_0}$ in at least $8\sqrt{d|F|}$ pairs. Step 5. Output $G_i(0)$ if $\exists!\, i.\, G_i(r) = y_0$; otherwise output nothing.

81 If $x_0 \in_R F^l$ and $y_0 = P(x_0)$, the algorithm above has an output identically distributed to that of the algorithm defined next. 1. In the algorithm above, we randomly choose $r, a, b$ and $x_0$. 2. In the algorithm below, we randomly choose $r, a, b, c$. Notice that $x_0$ and $c$ are correlated.

82 Algorithm. Step 1. Pick a random degree-3 curve $q : F \to F^l$, $q(z) = az^3 + bz^2 + cz + x$ with $a, b, c \in_R F^l$, so that $q(0) = x$. Set $L_x = \{q(t) \mid t \in F\}$. Step 2. Query $f$ on $L_x$ to get $S_x = \{(t, f(q(t))) \mid t \in F\}$. Step 3. Run Sudan's algorithm to get a list $G_1, \ldots, G_k$ of degree-$3d$ polynomials that agree with $S_x$ in at least $8\sqrt{d|F|}$ pairs. Step 4. Choose $r \in_R F$. Let $y_0 = P(q(r))$. [We do not know $P$.] Step 5. Output $G_i(0)$ if $\exists!\, i.\, G_i(r) = y_0$; otherwise output nothing.

83 We prove that the second algorithm outputs $P(x)$ with probability $\ge 0.9$ over its random choices. By an averaging argument, there is then some pair $(x_0, y_0)$ such that the first algorithm outputs $P(x)$ with probability $\ge 0.9$.

84 Analysis of the Algorithm. Let $q(z) = az^3 + bz^2 + cz + x$, where $a, b, c$ are chosen randomly and independently. Given $x_0$ and $z \ne 0$, one has $\Pr_{a,b,c \in_R F^l}[q(z) = x_0] = \frac{1}{|F|^l}$. Given $x_0^1, z_1, x_0^2, z_2$ such that $0 \ne z_1 \ne z_2 \ne 0$, the two conditions $q(z_1) = x_0^1$ and $q(z_2) = x_0^2$ entail $a(z_1^2 + z_1 z_2 + z_2^2) + b(z_1 + z_2) + c = \frac{x_0^1 - x_0^2}{z_1 - z_2}$. Now either $z_1 + z_2 \ne 0$ or $z_1^2 + z_1 z_2 + z_2^2 \ne 0$. Consequently $\Pr_{a,b,c \in_R F^l}[q(z_1) = x_0^1 \wedge q(z_2) = x_0^2] = \frac{1}{|F|^{2l}}$. We conclude that $q(z)$ is pairwise independent on points $z \ne 0$.

85 Analysis of the Algorithm
1. Suppose F = {f_1, ..., f_m}. Define the random variable X_i by
X_i = 1 if f(q(f_i)) = P(q(f_i)), and X_i = 0 otherwise.
By assumption E[X_i] = 10√(d/|F|). Let X = Σ_{i=1}^m X_i. Then E[X] = 10√(d·|F|). Since the X_i are pairwise independent, Var(X) = Σ_i Var(X_i) ≤ E[X]. With probability at least 0.99 the function f agrees with P on at least 8√(d·|F|) points of L_x, since by Chebyshev's inequality
Pr[X < 8√(d·|F|)] = Pr[10√(d·|F|) − X > 2√(d·|F|)]
≤ Pr[|X − 10√(d·|F|)| > 2√(d·|F|)]
≤ Var(X)/(4 d |F|) ≤ 10√(d·|F|)/(4 d |F|) = 2.5/√(d·|F|) < 0.01.
Computational Complexity, by Y. Fu Hardness Amplification 84 / 90
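The variance step in the Chebyshev bound can be spelled out. The evaluation points q(f_i) with f_i ≠ 0 are pairwise independent by the previous slide, hence so are the corresponding indicators X_i, and a 0/1 variable satisfies Var(X_i) ≤ E[X_i]:

```latex
\operatorname{Var}(X)=\sum_{i=1}^{m}\operatorname{Var}(X_i)
 \le \sum_{i=1}^{m}\mathbf{E}[X_i]=\mathbf{E}[X]=10\sqrt{d\,|F|},
\qquad
\Pr\Bigl[X<8\sqrt{d\,|F|}\Bigr]
 \le \frac{\operatorname{Var}(X)}{\bigl(2\sqrt{d\,|F|}\bigr)^{2}}
 \le \frac{10\sqrt{d\,|F|}}{4\,d\,|F|}
 = \frac{2.5}{\sqrt{d\,|F|}} .
```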

86 Analysis of the Algorithm
2. Sudan's algorithm ensures that the list G_1, ..., G_k obtained in Step 3 contains G, where G(z) = P(q(z)) is a univariate polynomial of degree at most 3d.
3. There cannot be more than √(|F|/(3d)) polynomials in the list. See the earlier remark.
4. Let d = 1000 and |F| > d^4. Any G_i ≠ G agrees with G in at most 3d points, so Pr_r[G_i(r) = G(r)] ≤ 3d/|F|. By the union bound
Pr_r[∃i. G_i ≠ G ∧ G_i(r) = G(r)] ≤ √(|F|/(3d)) · 3d/|F| = √(3d/|F|) < √(3/d^3) < 0.01.
The algorithm outputs G(0) = P(x) with probability ≥ 1 − 0.01 − 0.01 = 0.98 > 0.9.
Computational Complexity, by Y. Fu Hardness Amplification 85 / 90
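Under the stated parameters the chance that the random point r fails to separate G from the rest of the list is tiny. A quick numeric check, assuming the list-size bound √(|F|/(3d)) and the fact that two distinct degree-3d polynomials agree in at most 3d points (the boundary case |F| = d^4 is used for concreteness):

```python
import math

# Numeric sanity check of the union bound with d = 1000 and |F| = d^4.
d = 1000
F_size = d ** 4
per_pair = 3 * d / F_size                # Pr_r[G_i(r) = G(r)] for one G_i != G
list_size = math.sqrt(F_size / (3 * d))  # assumed bound on the list length k
union = list_size * per_pair             # union bound over the whole list

assert math.isclose(union, math.sqrt(3 * d / F_size))
assert union < 0.01                      # a random r separates G from the rest
```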

87 Concatenated Local List Decoder
Let E_1 : {0,1}^n → Σ^m and E_2 : Σ → {0,1}^k be codes with local list decoders. The decoder for E_1 takes an index in I_1 and handles a 1 − ε_1 fraction of errors. The decoder for E_2 takes an index in I_2 and handles a 1 − ε_2 fraction of errors. The concatenated code E_2 ∘ E_1 : {0,1}^n → {0,1}^{mk} takes an index in I_1 × I_2 and handles a 1 − ε_1 ε_2 fraction of errors.
Suppose y is a corrupted version of the codeword of x. Then at least an ε_1 fraction of the m blocks of y are ε_2-correct. So there are at least ε_1 m blocks upon which E_2 can output the correct symbol with high probability using some i_2 ∈ I_2.
Computational Complexity, by Y. Fu Hardness Amplification 86 / 90
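Blockwise concatenation itself is straightforward to sketch. The two component codes below are made-up stand-ins (a repetition outer code over 3-bit symbols and a parity-check inner code), not the E_1, E_2 of these notes:

```python
# Toy concatenation E2 ∘ E1 over 3-bit symbols (illustrative codes only).

def E1(msg_bits):
    """Outer stand-in: chop the message into 3-bit symbols, repeat each twice."""
    syms = [tuple(msg_bits[i:i + 3]) for i in range(0, len(msg_bits), 3)]
    return [s for s in syms for _ in range(2)]

def E2(sym):
    """Inner stand-in: append an overall parity bit (k = 4)."""
    return list(sym) + [sym[0] ^ sym[1] ^ sym[2]]

def concat_encode(msg_bits):
    """E2 ∘ E1: encode with E1, then re-encode each outer symbol with E2."""
    return [bit for sym in E1(msg_bits) for bit in E2(sym)]

cw = concat_encode([1, 0, 1, 0, 1, 1])   # 6 bits -> 4 symbols -> 4*4 = 16 bits
assert len(cw) == 16
```

The point of the construction is only the shape: an m-symbol outer codeword becomes m blocks of k bits each, and a decoder for the concatenation works block by block.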

88 Hardness Amplification Using Local List Decoding Computational Complexity, by Y. Fu Hardness Amplification 87 / 90

89 Worst Case Hardness ⇒ Strong Hardness
Theorem. Let S : N → N be monotone and time constructible. Let f ∈ E be such that H_wrs(f)(n) ≥ S(n). Then there are some g ∈ E and c > 0 such that H_avg(g)(n) ≥ S(n/c)^{1/c} for large n.
A function f : {0,1}^n → {0,1} is seen as a string in {0,1}^{2^n}.
1. Let E : {0,1}^N → {0,1}^{N^c} be an ECC, where N = 2^n.
2. Let g(x) = E(f)(x) for x ∈ {0,1}^{cn}, noting |{0,1}^{cn}| = N^c.
Since we need to deal with a corruption rate of nearly 50%, we need E to have a local list decoder.
Computational Complexity, by Y. Fu Hardness Amplification 88 / 90

90 Worst Case Hardness ⇒ Strong Hardness
E = WH ∘ RM.
1. The Reed-Muller code RM, which is of type F^{(l+d choose d)} → F^{(F^l)}, is given by the following parameters: |F| = S(n)^δ, d = √|F|, l = 2 log(N)/log(d).
2. The Walsh-Hadamard code WH is of type {0,1}^{log|F|} → {0,1}^{|F|}, identifying F with {0,1}^{log|F|}. Hence E : F^{(l+d choose d)} → {0,1}^{|F|^{l+1}}.
N = 2^n, n′ = cn and N′ = 2^{cn}. Wlog, d > (log N)^3 and S(n) < N and (l+d choose d) ≥ (d/l)^l.
Computational Complexity, by Y. Fu Hardness Amplification 89 / 90
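The Walsh-Hadamard encoder is simple enough to write out. A toy instance with log|F| = 3, so codewords have length 8 (sizes here are illustrative):

```python
# Walsh-Hadamard encoding WH : {0,1}^n -> {0,1}^{2^n}; position y of the
# codeword holds the inner product <msg, y> over GF(2).

def wh_encode(msg_bits):
    n = len(msg_bits)
    cw = []
    for y in range(2 ** n):
        bit = 0
        for i in range(n):
            bit ^= msg_bits[i] & ((y >> i) & 1)   # i-th bit of y, little-endian
        cw.append(bit)
    return cw

cw = wh_encode([1, 0, 1])
assert len(cw) == 8
assert sum(cw) == 4    # any nonzero message gives a codeword of weight 2^{n-1}
```

The weight-2^{n−1} property is what gives WH relative distance 1/2, which is why it serves as the inner code for handling near-50% corruption.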

91 Worst Case Hardness ⇒ Strong Hardness
Suppose ε ∈ (0, 1) and g : {0,1}^{cn} → {0,1} could be computed by a circuit D of size < S(n)^ε that is correct on a 1/2 + S(n)^{−ε} fraction of the inputs.
By definition a local list decoder for E calculates f : {0,1}^n → {0,1} in poly(S(n)^δ) time with probability 2/3. By BPP ⊆ P/poly a precise decoder can be implemented by a circuit of size poly(S(n)^δ). The local list decoder queries D poly(S(n)^δ) times. Consequently f could be calculated by a circuit of size less than poly(S(n)^δ) · S(n)^ε < S(n), by hardwiring the index for f and by taking δ small enough, contradicting the assumption.
Computational Complexity, by Y. Fu Hardness Amplification 90 / 90


More information

Efficient Probabilistically Checkable Debates

Efficient Probabilistically Checkable Debates Efficient Probabilistically Checkable Debates Andrew Drucker MIT Andrew Drucker MIT, Efficient Probabilistically Checkable Debates 1/53 Polynomial-time Debates Given: language L, string x; Player 1 argues

More information

Arrangements, matroids and codes

Arrangements, matroids and codes Arrangements, matroids and codes first lecture Ruud Pellikaan joint work with Relinde Jurrius ACAGM summer school Leuven Belgium, 18 July 2011 References 2/43 1. Codes, arrangements and matroids by Relinde

More information

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic

More information

Locality in Coding Theory

Locality in Coding Theory Locality in Coding Theory Madhu Sudan Harvard April 9, 2016 Skoltech: Locality in Coding Theory 1 Error-Correcting Codes (Linear) Code CC FF qq nn. FF qq : Finite field with qq elements. nn block length

More information

Locally testable and Locally correctable Codes Approaching the Gilbert-Varshamov Bound

Locally testable and Locally correctable Codes Approaching the Gilbert-Varshamov Bound Electronic Colloquium on Computational Complexity, Report No. 1 (016 Locally testable and Locally correctable Codes Approaching the Gilbert-Varshamov Bound Sivakanth Gopi Swastik Kopparty Rafael Oliveira

More information

Reed-Solomon Error-correcting Codes

Reed-Solomon Error-correcting Codes The Deep Hole Problem Matt Keti (Advisor: Professor Daqing Wan) Department of Mathematics University of California, Irvine November 8, 2012 Humble Beginnings Preview of Topics 1 Humble Beginnings Problems

More information

Tutorial: Locally decodable codes. UT Austin

Tutorial: Locally decodable codes. UT Austin Tutorial: Locally decodable codes Anna Gál UT Austin Locally decodable codes Error correcting codes with extra property: Recover (any) one message bit, by reading only a small number of codeword bits.

More information