Combinatorial Aspects of Error Correction and Property Testing


The Raymond and Beverly Sackler Faculty of Exact Sciences
Blavatnik School of Computer Science

Combinatorial Aspects of Error Correction and Property Testing

Thesis submitted for the degree of Doctor of Philosophy

by Amit Weinstein

Under the supervision of Prof. Noga Alon

Submitted to the Senate of Tel Aviv University
August 2013


Acknowledgments

This thesis is a result of hard work, but equally important, plenty of help from various people along the way. First and foremost, I would like to offer my deepest appreciation to my advisor, Professor Noga Alon. Noga has been a wonderful advisor and source of inspiration, sharing his vast knowledge, ideas, and fascinating research questions. Noga's guidance and support throughout the research were key to its fruitfulness. Thank you Noga for this unforgettable chapter in my life, which I will cherish greatly.

I would like to thank my coauthors and research roommates: Noga Alon, Ido Ben-Eliezer, Sonny Ben-Shimon, Eric Blais, Asaf Ferber, Simi Haber, Rani Hod, Alon Naor, Uri Stav, and Yuichi Yoshida. Whether we solved a mathematical mystery together or had numerous intriguing discussions throughout the days, I have learned a lot from them and enjoyed doing research by their side.

Tel Aviv University and Google have provided me with the flexibility to divide my time between mathematical research and software engineering. I am thankful for this opportunity to pursue both, and for their willingness to let me explore it my way. I would also like to thank the administrative staff of the School of Computer Science at Tel Aviv University, who were always supportive and helpful.

My biggest gratitude is to my wife Lital, for her never-ending support along the way, which made everything much easier. Thank you for encouraging me and being there for me in this roller coaster of mathematical research.


Abstract

Given a family of Boolean functions, property testing considers the task of distinguishing between functions of the family and functions that are far from it, by querying the input function at as few locations as possible. Identifying the optimal number of queries needed for each family is the main research goal in this field. A recent modification of this question allows the algorithm to only perform random queries, or to choose its queries from a polynomial-size set of random inputs. These modifications are called passive and active testing, respectively.

Many of the property testing results in this thesis involve the family of partially symmetric functions, that is, functions which are invariant under reordering of all but a small fraction of their input variables. We provide insight into the query complexity of testing partially symmetric functions in the different testing scenarios, demonstrating a large gap between them. We additionally suggest a strong connection between the family of partially symmetric functions and the functions for which isomorphism can be tested efficiently. A new tool called symmetric influence, which measures the probability that reordering a set of variables modifies the output of the function, is a key component in some of our proofs. Together with the connection to intersecting families, it provides purely combinatorial proofs of our results, as well as of previously known results regarding juntas (that is, functions which depend on a small fraction of their input variables) which relied on Fourier analysis.

Another field considered in this thesis is local correction, and error correcting codes in general. Unlike property testing, in error correction the given input is known to be close to being in the family, and our goal is to

recover a member of the family which is close to it. In local correction we are requested to return the corrected value at a specific input, where, similarly to property testing, we are interested in the minimal number of queries required to perform this task. The distance parameter plays a crucial role in the analysis of local correction. We show several results for both small and constant distance parameters, indicating that most juntas and partially symmetric functions can be efficiently locally corrected even for constant distance.

In the more general setting of error correcting codes, we assume the correction algorithm considers the entire input, and therefore the query model is no longer relevant. Here we are typically interested in whether there is enough information to properly correct the input, and the algorithm itself may be shown to exist using probabilistic methods (and hence is not given explicitly). This thesis considers a special case where there are several receivers, each of whom wishes to receive a potentially different message, and each of whom has a different type of noise. The message itself is broadcast to all users simultaneously, and the main research goal is to determine how much information can be sent in different scenarios.



Contents

Abstract

1 Introduction

2 Boolean Functions
    Juntas
    Partially symmetric functions
        Properties of symmetric influence
        Fourier representation
    Typical juntas and partially symmetric functions

3 Property Testing via Intersecting Families
    Testing juntas
    Testing partial symmetry
        Analysis of Find-Asymmetric-Set
    Isomorphism testing of partially symmetric functions
    Efficient sampler for partially symmetric functions
    Discussion

4 Active and Passive Testing
    Testing k-linear functions
        Proof of Lemma
    Testing partially symmetric functions
    Testing low degree polynomials
    Discussion

5 Local Correction
    Local correction of k-juntas
    Local correction with constant error rate
        Correcting juntas
        Correcting partially symmetric functions
    Discussion

6 Simultaneous Communication in Noisy Channels
    Unions of disjoint cliques: outperforming convexity
        An alternative proof of Theorem 6.6 (sketch)
        Proof of Corollary
    A clique minus a clique: convexity is everything
    Two users, three letters: the complete story
        Confusion graph with two edges
        The first confusion graph has a single edge
    Discussion

7 Semi-Strong Coloring of Intersecting Hypergraphs
    General lower bounds
    Probabilistic upper bound
    Discussion

Bibliography

Chapter 1

Introduction

Combinatorics is a powerful area used in a large variety of fields. In this thesis, we explore how combinatorial tools can help research in two major topics. The first topic is property testing, where the goal is to efficiently verify that a given object satisfies some property, or indicate that it is far from satisfying it. The second topic is somewhat related, and yet quite different. Here the given object is guaranteed not to satisfy the property, but rather to be close to satisfying it, and the task is to recover an object which satisfies the property and is close to it (either locally or globally). Each of these topics is a large field on its own, with many interesting problems to consider. This thesis describes new results regarding several problems in each of them, demonstrating how combinatorial tools can be used.

Property testing

Property testing considers the following general problem: given a property P, identify the minimum number of queries required to determine with high probability whether an input has the property P or whether it is far from P. This question was first formalized by Rubinfeld and Sudan [63].

Definition 1.1 ([63]). Let P be a set of Boolean functions. An ε-tester for P is a randomized algorithm which queries an unknown function f : Z_2^n → Z_2 on a small number of inputs and

Accepts with probability at least 2/3 when f ∈ P;

Rejects with probability at least 2/3 when f is ε-far from P, where f is ε-far from P if dist(f, g) := |{x ∈ Z_2^n : f(x) ≠ g(x)}| / 2^n ≥ ε holds for every g ∈ P.

Goldreich, Goldwasser, and Ron [46] extended the scope of this definition to graphs and other combinatorial objects. Since then, the field of property testing has been very active. For an overview of recent developments, we refer the reader to the surveys [61, 62] and the book [45].

A notable achievement in the field of property testing is the complete characterization of graph properties that are testable with a constant number of queries [7]. An ambitious open problem is obtaining a similar characterization for properties of Boolean functions. Recently there has been a lot of progress on the restriction of this question to properties that are closed under linear or affine transformations [17, 50]. Similarly, one might hope to settle this open problem for all properties of Boolean functions that are closed under relabeling of the input variables. Many such properties have already been studied in the context of property testing, among them linearity [26], being a dictator function [15, 58], a junta [19, 20, 29, 40], or a low-degree polynomial [9, 18, 49].

Isomorphism testing. An important sub-problem of this open question is function isomorphism testing. Given a Boolean function f, the f-isomorphism testing problem is to determine whether a function g is isomorphic to f, that is, whether it is the same up to relabeling of the input variables, or far from being so. This problem was first raised by Fischer et al. [40]. They observed that fully symmetric functions are trivially isomorphism testable with a constant number of queries. They also showed that every k-junta, that is, every function which depends on at most k of the input variables, is isomorphism testable with poly(k) queries.
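To make the distance measure of Definition 1.1 concrete, the following brute-force Python sketch (ours, purely illustrative; the function names are not from the text) computes dist(f, g) for small n and checks ε-farness against a finite family of functions.

```python
from itertools import product

def dist(f, g, n):
    """Normalized Hamming distance between Boolean functions on Z_2^n."""
    disagreements = sum(f(x) != g(x) for x in product((0, 1), repeat=n))
    return disagreements / 2 ** n

def is_eps_far(f, family, n, eps):
    """f is eps-far from the family if dist(f, g) >= eps for every g in it."""
    return all(dist(f, g, n) >= eps for g in family)

# Parity on 3 bits disagrees with the constant-0 function on half the cube.
parity = lambda x: sum(x) % 2
zero = lambda x: 0
assert dist(parity, zero, 3) == 0.5
assert is_eps_far(parity, [zero], 3, 0.5)
```

Of course, an actual tester never enumerates the whole cube; the point of the field is to approximate this decision with very few queries.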
A natural goal, and the focus of Chapter 3, is to characterize the set of functions for which isomorphism testing can be done with a constant number of queries. Recent research provided lower bounds and improved upper

bounds for isomorphism testing of juntas. However, juntas and symmetric functions essentially remained the only classes of functions known to be isomorphism testable with a constant number of queries.

Partially symmetric functions. The following definition generalizes both these classes of functions, and plays an important role throughout this thesis, in both property testing and error correction.

Definition 1.2 (Partially symmetric functions). For a subset J ⊆ [n] := {1, ..., n}, a function f : Z_2^n → Z_2 is J-symmetric if permuting the labels of the variables of J does not change the function. Moreover, f is called t-symmetric if there exists J ⊆ [n] of size at least t such that f is J-symmetric.

Shannon first introduced partially symmetric functions as part of his investigation of the circuit complexity of Boolean functions [65]. He showed that while most functions require an exponential number of gates to compute, every partially symmetric function can be implemented much more efficiently. Research on the role of partial symmetry in the complexity of implementing functions in circuits, binary decision diagrams, and other models has remained active ever since [34, 56]. Chapter 2 includes more details on partially symmetric functions and their properties. An important tool in the analysis of such functions is the notion of symmetric influence, which measures how influential a set of variables is when its members are randomly permuted. This notion is somewhat analogous to that of influence, which is often used in the analysis of juntas.

Our main result in Chapter 3 is an efficient isomorphism testing algorithm for partially symmetric functions. We show that ε-testing isomorphism to any (n − k)-symmetric function can be done with O(k log k / ε²) queries (Theorem 3.1).
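Definition 1.2 can be checked directly by brute force for small n: f is J-symmetric iff its value depends only on the variables outside J together with the Hamming weight inside J. The following sketch (ours, for illustration only; 0-based variable indices) uses this equivalent formulation rather than enumerating all permutations.

```python
from itertools import combinations, product

def is_J_symmetric(f, n, J):
    """f is J-symmetric iff f(x) depends only on the coordinates outside J
    together with the Hamming weight of x inside J."""
    seen = {}
    for x in product((0, 1), repeat=n):
        key = (tuple(x[i] for i in range(n) if i not in J),
               sum(x[i] for i in J))
        if seen.setdefault(key, f(x)) != f(x):
            return False
    return True

def is_t_symmetric(f, n, t):
    """f is t-symmetric iff it is J-symmetric for some J of size t."""
    return any(is_J_symmetric(f, n, set(J)) for J in combinations(range(n), t))

# f depends on x_0 and on the weight of (x_1, x_2, x_3): it is {1,2,3}-symmetric.
f = lambda x: x[0] & (1 if x[1] + x[2] + x[3] >= 2 else 0)
assert is_J_symmetric(f, 4, {1, 2, 3})
assert is_t_symmetric(f, 4, 3)
```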
Since k-juntas are in particular (n − k)-symmetric (taking J to be the set of variables the function does not depend on), this provides a new proof of the best known isomorphism tester for k-juntas, up to the dependence on ε. The testing algorithm is divided into two main parts. The first part verifies that the given function is indeed partially symmetric, or at least close to being so. Once we know the function is essentially partially symmetric, the

second part verifies that we indeed have the function we expect. As a byproduct, we prove that the property of being (n − k)-symmetric can be ε-tested using O((k/ε) log(k/ε)) queries (Theorem 3.2).

Active and passive testing

In the traditional definition of property testing (Definition 1.1) the algorithm is allowed to pick its queries from the entire set Z_2^n. Balcan et al. [13] suggested restricting the tester to take its queries from a smaller, typically random, subset of Z_2^n. This model is called active testing, in resemblance to active learning (see, e.g., [31]). Active testing gets more difficult as the size of the set we can query from decreases, and the extreme case is when it is exactly the number of queries we perform (so the algorithm actually has no choice). This is known as passive testing, or testing from random examples, and was studied in [46, 51]. Typically we are interested in the case where the size of the set is polynomial.

In their recent paper [13], Balcan et al. have shown that active and passive testing of dictator functions is as hard as learning them, and requires Θ(log n) queries (unlike the classic model, in which it can be done with a constant number of queries). The main result we present in Chapter 4 extends this result to k-linear functions, proving that passive and active testing of them requires Θ(k log n) queries, assuming k is not too large. Other classes of functions considered in Chapter 4 include juntas, partially symmetric functions, linear functions, and low degree polynomials. For linear functions we provide tight bounds on the query complexity in both active and passive models (which asymptotically differ). The analysis for low degree polynomials is less complete and the exact query complexity is given only for passive testing. In both these cases, the query complexity for passive testing is essentially equivalent to that of learning.
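The statement that passive testing of linear functions is essentially as hard as learning them can be made tangible: a hidden parity a ∈ Z_2^n is recoverable from roughly n random labeled examples by Gaussian elimination over GF(2). The sketch below is our own illustration of this folklore learner, not an algorithm from the thesis.

```python
from itertools import product

def learn_linear(examples, n):
    """Recover the hidden vector a in Z_2^n from labeled examples
    (x, <a, x> mod 2) by Gaussian elimination over GF(2).
    Returns None when the examples do not determine a uniquely."""
    rows = [list(x) + [y] for x, y in examples]   # augmented matrix over GF(2)
    pivots = {}                                   # pivot column -> reduced row
    for row in rows:
        for col in range(n):
            if row[col] == 1:
                if col in pivots:
                    row = [a ^ b for a, b in zip(row, pivots[col])]
                else:
                    pivots[col] = row
                    break
    if len(pivots) < n:
        return None
    # Back-substitution: each pivot row reads sum_{c >= col} row[c]*a[c] = row[n].
    a = [0] * n
    for col in sorted(pivots, reverse=True):
        row = pivots[col]
        a[col] = (row[n] + sum(row[c] * a[c] for c in range(col + 1, n))) % 2
    return a

# Demo: recover a hidden parity from exhaustive examples on Z_2^4
# (a passive learner would draw O(n) random examples instead).
hidden = [1, 0, 1, 1]
examples = [(x, sum(h * xi for h, xi in zip(hidden, x)) % 2)
            for x in product((0, 1), repeat=4)]
assert learn_linear(examples, 4) == hidden
```

Since Ω(n) random examples are already needed just to pin down the hidden vector, this also gives intuition for why passive lower bounds of this flavor scale with log of the family size.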
For juntas and partially symmetric functions, we provide some lower and upper bounds for the different models. When k is constant, our analysis for both families is tight. Moreover, the family of partially symmetric functions is the first example for which the query complexities in all these models are asymptotically different.

Local correction

Local correction of functions deals with the task of determining the value of a function at a requested input by reading its potentially erroneous evaluations at several other locations. More precisely, we care about locally correcting specific functions which are known up to isomorphism. The main goal is identifying the number of needed queries for this task, for a given function and distance parameter. For a permutation σ ∈ S_n and a function f = f(x_1, ..., x_n) : Z_2^n → Z_2, let f_σ denote the isomorphism of f given by f_σ(x_1, ..., x_n) = f(x_{σ(1)}, ..., x_{σ(n)}).

Question. Given a specific Boolean function f, what is the needed query complexity in order to correct an input function g at a requested input x, where g is ε-close to some (unknown) isomorphism f_σ of f?

This question can be seen as a special case of locally correctable codes (see, e.g., [72]). Each codeword would be the 2^n evaluations of an isomorphic copy of the input function; thus the number of distinct codewords is at most n!, and we would like to correct any specific value of the given noisy codeword using as few queries as possible.

The closeness parameter ε in the above question plays a crucial role in answering it. As we will see in Chapter 5, when ε is small enough, we can locally correct all juntas; however, this does not hold for a constant ε. Moreover, the number of needed queries may differ substantially between different functions of the same family, even for a very small distance.

Small distance. Our first focus in local correction is juntas, where we allow the distance parameter ε to be exponentially small in terms of k, the junta parameter. When ε < 2^{−k}/3, the input function is guaranteed to be close to a unique k-junta, hence local correction is indeed feasible.
In fact, a simple argument shows that we can locally correct k-juntas, or more generally any degree-k polynomial, using O(2^k) queries (even without knowing the identity of the specific instance in the family). A natural question to ask then is: can we locally correct using fewer queries? The answer is neither a definitive yes nor a definitive no. If we consider all juntas, then one cannot improve the number of queries. That is,

there exist k-juntas which require 2^{Ω(k)} queries in order to be locally corrected, even when ε < 2^{−k}/3 (Theorem 5.7). However, for most juntas, this is not the case. Most k-juntas can be locally corrected from this distance using O(k log k) queries (Theorem 5.8). The main requirement for a junta to be so efficiently locally correctable is that all of its k influencing variables are significant. Naturally, if one of these variables rarely influences the output of the function, identifying this variable and correcting potential errors may be harder. Luckily, typical juntas satisfy this property, as we show in Chapter 2.

Constant distance. Our second focus in local correction deals with the case where the distance parameter ε is required to be constant. Which functions can be locally corrected in this case with a constant number of queries (one which does not depend on the number of variables n)?

We first revisit the case of juntas. As we just mentioned, most k-juntas can be locally corrected very efficiently using only O(k log k) queries. This indicates that the requirement on the distance parameter can be relaxed, as even when ε = k^{−2}, we are not likely to encounter a single faulty evaluation of the function. However, here we are interested in a constant distance parameter, which also does not depend on k. In order to handle a constant distance parameter, we need yet another requirement on juntas. Before, we required each variable to be significantly influential; now we also require the junta to be far from any non-trivial isomorphism of itself (that is, from any reordering of the junta variables among themselves). This requirement is needed to prevent us from correcting to the wrong isomorphism of the given junta. Luckily again, this property holds for most juntas, and therefore we reach the following result.
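The O(2^k)-query upper bound for degree-k polynomials mentioned above is, in spirit, the classical self-correction of Reed-Muller codes: a degree-k polynomial p over Z_2 satisfies Σ_{S ⊆ [k+1]} p(x + Σ_{i∈S} y_i) = 0 for all y_1, ..., y_{k+1}, so p(x) equals the XOR of the 2^{k+1} − 1 values at the nonempty shifts, each of which is a uniformly random point. The sketch below (ours; the thesis's actual argument may differ) amplifies correctness with a majority vote over independent rounds.

```python
import random
from itertools import combinations

def correct_at(f_noisy, x, k, n, repetitions=21):
    """Locally correct a (possibly corrupted) degree-k polynomial over Z_2
    at the point x.  Each round queries the 2^(k+1) - 1 points x + y_S for
    nonempty S, XORs the answers (which equals p(x) when no query hit an
    error), and a majority vote over rounds amplifies the success probability."""
    votes = []
    for _ in range(repetitions):
        ys = [[random.randrange(2) for _ in range(n)] for _ in range(k + 1)]
        value = 0
        for size in range(1, k + 2):
            for S in combinations(range(k + 1), size):
                shift = [sum(ys[i][j] for i in S) % 2 for j in range(n)]
                point = tuple((xi + si) % 2 for xi, si in zip(x, shift))
                value ^= f_noisy(point)
        votes.append(value)
    return max(set(votes), key=votes.count)

# Demo: p(x) = x_0 * x_1 has degree 2; with no corruption every vote is exact.
p = lambda x: x[0] * x[1]
assert correct_at(p, (1, 1, 0), 2, 3) == 1
assert correct_at(p, (0, 1, 1), 2, 3) == 0
```

Since each round makes 2^{k+1} − 1 uniformly random queries, a union bound shows a single round is correct with probability at least 1 − (2^{k+1} − 1)ε, which is where the exponentially small ε enters.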
Almost every k-junta can be locally corrected from a constant distance ε using O(k log² k) queries (Theorem 5.10). Notice that we pay an extra logarithmic factor compared to the correcting algorithm for smaller distance. The correction algorithm for juntas heavily relies on the structure of the junta, that is, on having a small number of special variables. The family of partially symmetric functions shares this special structure, with the difference

that the other variables may still influence the output, but symmetrically as a group. It is therefore natural to ask whether we can locally correct it in a similar manner. Indeed, we show that almost every (n − k)-symmetric function can be locally corrected from a constant distance ε using O(k log² k) queries (Theorem 5.11).

Global correction

Local correction and property testing typically focus on the number of queries needed to perform a task, either correcting or testing, given the input object. In contrast, questions in error correction deal with the amount of information we can send successfully, assuming some restriction on the amount of noise. The correction process can use all the information that was sent, rather than looking at a small portion of the input.

Chapter 6 investigates a special case of error correction. A sender wishes to broadcast a message of length n over some alphabet to r users, where each user i, 1 ≤ i ≤ r, should be able to receive one of m_i possible messages. The broadcast channel has noise for each of the users (possibly different noise for different users), who cannot distinguish between some pairs of letters. The vector (m_1, m_2, ..., m_r)^{(n)} is said to be feasible if length-n encoding and decoding schemes exist enabling every user to decode his message. A rate vector (R_1, R_2, ..., R_r) is feasible if there exists a sequence of feasible vectors (m_1, m_2, ..., m_r)^{(n)} such that

R_i = lim_{n→∞} (log_2 m_i) / n for all i.

We determine the feasible rate vectors for several different scenarios and investigate some of their properties. An interesting case discussed is when one user can only distinguish between the letters of a subset of the alphabet. Tight restrictions on the feasible rate vectors for some specific noise types for the other users are provided. The simplest non-trivial cases of two users and an alphabet of size three are fully characterized.
To this end we use a more general, previously known result, for which we sketch an alternative proof. This problem generalizes the study of the Shannon capacity of a graph by considering more than a single user.
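For intuition on the role of confusion graphs: with a single channel use, a user whose confusion graph is G can receive exactly α(G) distinct messages, one per vertex of a maximum independent set (the Shannon capacity arises from the analogous count on powers of G). A brute-force sketch of this one-shot count, with our own graph encoding:

```python
from itertools import combinations

def independence_number(letters, confusions):
    """Maximum number of pairwise distinguishable letters: with one channel
    use, this is exactly how many zero-error messages the user can receive."""
    confused = {frozenset(pair) for pair in confusions}
    for size in range(len(letters), 0, -1):
        for cand in combinations(letters, size):
            if all(frozenset(p) not in confused for p in combinations(cand, 2)):
                return size
    return 0

# Alphabet {a, b, c} where the user confuses 'a' with 'b': 2 one-shot messages.
assert independence_number(['a', 'b', 'c'], [('a', 'b')]) == 2
```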

Semi-strong coloring of intersecting hypergraphs

For any c ≥ 2, a c-strong coloring of a hypergraph G is an assignment of colors to the vertices of G such that for every edge e of G, the vertices of e are colored by at least min{c, |e|} distinct colors. The hypergraph G is t-intersecting if every two edges of G have at least t vertices in common. We ask: for fixed c ≥ 2 and t ≥ 1, what is the minimum number of colors that is sufficient to c-strong color any t-intersecting hypergraph?

In Chapter 7 we try to answer the above question for some values of t and c and, more importantly, to describe the settings for which the question is still open. We show that when t ≤ c − 2, no finite number of colors is sufficient to c-strong color all t-intersecting hypergraphs. It is still unknown whether a finite number of colors suffices for the same task when t = c − 1 and c > 3. In the last case, when t ≥ c, we show using a probabilistic argument that a finite number of colors is sufficient to c-strong color all t-intersecting hypergraphs, but a large gap still remains between the best upper and lower bounds on this number.

The topic of semi-strong coloring of intersecting hypergraphs came up in the study of isomorphism testing presented in Chapter 3. A common approach in such testing algorithms is that of implicit learning, where we randomly partition some domain and identify a small subset of special parts in the partition. The main obstacle is often to prove that when the function is far from satisfying the questioned property, no choice of a small number of special parts would fool the tester.

Bibliographic notes

Most of the research presented in this thesis is based on joint work with coauthors. Some of this work has already been published in technical reports, conference abstracts, and/or journal articles.
Chapter 2 describes partially symmetric functions and the notion of symmetric influence and its properties, as presented in the joint work with Eric Blais and Yuichi Yoshida [25]. It further presents typical properties of both juntas and partially symmetric functions, needed for the results presented in

Chapter 5, which were proven in joint works with Noga Alon [11, 12]. The main results about testing function isomorphism and the connection to intersecting families, which are presented in Chapter 3, are based on the same work with Blais and Yoshida [25]. The related research about semi-strong coloring of intersecting hypergraphs was done in a follow-up work [24] and is presented in Chapter 7. The results about active and passive testing presented in Chapter 4 are based on a joint work with Noga Alon and Rani Hod [8]. Chapter 5 describes several results regarding local correction of juntas, both from constant error rate and from exponentially small error rate, as well as results for local correction of partially symmetric functions. These results are based on joint works with Noga Alon [11, 12]. The research on simultaneous communication in noisy channels was first published in [70] and is presented here in Chapter 6.


Chapter 2

Boolean Functions

2.1 Juntas

Juntas are functions which depend on a relatively small number of variables. Therefore we often consider their concise representation over these variables only. Given a k-junta f, we denote by f_core : Z_2^k → Z_2 the core of f, which is the function f restricted to its k junta variables in their natural order.

A variable of a function is said to be influencing if modifying its value can modify the output of the function. Clearly a k-junta has at most k influencing variables, which are in fact the junta variables (those which appear in its core). The following definition quantifies how influential a variable, or more generally a set of variables, is with respect to a given function.

Definition 2.1 (Influence). The influence of the set J ⊆ [n] of variables in the function f : Z_2^n → Z_2 is

Inf_f(J) = Pr_{x,y}[f(x) ≠ f(x_{[n]\J} y_J)],

where x_{[n]\J} y_J is the vector z ∈ Z_2^n obtained by setting z_i = y_i for every i ∈ J and z_i = x_i for every i ∈ [n] \ J. When the set J = {i} is a singleton, we simply write Inf_f(i).

The notion of influence is a key tool in the analysis of juntas. Clearly a function f is a k-junta iff there exists a set J of n − k variables such that

Inf_f(J) = 0. However, a closer look reveals that the influence of a set of variables J also provides an estimate of how close the function is to being a junta over the variables outside of J. The lower the influence of the set J, the more likely it is that the value of f is determined solely by the variables outside of J. In particular, if a function f is ε-far from being a k-junta, then every set of at least n − k variables has influence at least ε. The following lemma demonstrates two additional properties of influence, monotonicity and sub-additivity.

Lemma 2.2 (Fischer et al. [40]). For every f : Z_2^n → Z_2 and every J, K ⊆ [n],

Inf_f(J) ≤ Inf_f(J ∪ K) ≤ Inf_f(J) + Inf_f(K).

Furthermore, if f is ε-far from k-juntas and |J| ≥ n − k, then Inf_f(J) ≥ ε.

2.2 Partially symmetric functions

A key element throughout this work is the family of partially symmetric functions, that is, functions invariant under any reordering of the variables of some set J ⊆ [n]. Let S_J denote the set of permutations of [n] which only move elements of the set J. A function f : Z_2^n → Z_2 is J-symmetric if f(x) = f(πx) for every input x and permutation π ∈ S_J, where πx is the vector whose π(i)-th coordinate is x_i.

Unlike juntas, partially symmetric functions typically depend on all the variables of the input. However, there is a large set of variables which influence the function only through their combined Hamming weight, and not through the value of each individual coordinate. The concise representation of partially symmetric functions is therefore different from that of juntas. Let f : Z_2^n → Z_2 be an (n − k)-symmetric function. We define the core of f, denoted by f_core : Z_2^k × {0, 1, ..., n − k} → Z_2, to be the function f restricted to its k asymmetric variables and the Hamming weight of the remaining variables.
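The influence of Definition 2.1 and the bounds of Lemma 2.2 are easy to verify exhaustively for small n; the following brute-force sketch (ours, illustrative only; 0-based indices) does so for a toy junta.

```python
from itertools import product

def influence(f, n, J):
    """Inf_f(J) of Definition 2.1: the probability over uniform x, y that
    replacing the J-coordinates of x with those of y changes f."""
    total = changed = 0
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            z = tuple(y[i] if i in J else x[i] for i in range(n))
            total += 1
            changed += f(x) != f(z)
    return changed / total

# Sanity checks on the 2-junta f(x) = x_0 XOR x_1, embedded in n = 3 variables.
f = lambda x: x[0] ^ x[1]
assert influence(f, 3, {2}) == 0          # an irrelevant variable
assert influence(f, 3, {0}) == 0.5        # rerandomizing x_0 flips f half the time
assert influence(f, 3, {0}) <= influence(f, 3, {0, 1})                         # monotonicity
assert influence(f, 3, {0, 1}) <= influence(f, 3, {0}) + influence(f, 3, {1})  # sub-additivity
```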
The notion of influence represents how much a variable, or more generally a set of variables, can influence the output of the function when their values are modified.

For partially symmetric functions, however, as typically all the variables influence the output of the function, this notion is less useful. Instead, as the special variables are in fact the asymmetric variables of the function, we find the following definition of symmetric influence more useful. For a set of variables, the symmetric influence measures how invariant the function is under reorderings of the elements of that set.

Definition 2.3 (Symmetric influence). The symmetric influence of a set J ⊆ [n] of variables in a Boolean function f : Z_2^n → Z_2 is defined as

SymInf_f(J) = Pr_{x ∈ Z_2^n, π ∈ S_J}[f(x) ≠ f(πx)].

It is not hard to see that a function f is t-symmetric iff there exists a set J of size t such that SymInf_f(J) = 0. A much stronger connection, however, exists between these notions, similar to the connection between juntas and influence. In the following subsection we describe in more detail additional properties of symmetric influence, which demonstrate why symmetric influence is very much analogous to influence.

Properties of symmetric influence

As we have just seen, symmetric influence can be used to define partially symmetric functions. However, the notion of symmetric influence also approximates the distance of a function from being partially symmetric.

Lemma 2.4. Given a function f : Z_2^n → Z_2 and a subset J ⊆ [n], let f_J be the J-symmetric function closest to f. Then the symmetric influence of J satisfies

dist(f, f_J) ≤ SymInf_f(J) ≤ 2 dist(f, f_J).

Proof. For every weight 0 ≤ w ≤ n and z ∈ Z_2^{[n]\J}, define the layer

L^w_z := {x ∈ Z_2^n : |x| = w and x_{[n]\J} = z}

consisting of the vectors of Hamming weight w which agree with z outside of J (so |L^w_z| = (|J| choose w − |z|) if |z| ≤ w ≤ |J| + |z|, and L^w_z is empty otherwise). Let p^w_z ∈ [0, 1/2] be the fraction of the vectors in L^w_z one has to modify in order to make the restriction of f to L^w_z constant.

With these notations, we can restate the definition of the symmetric influence of J as follows:

SymInf_f(J) = Σ_{z,w} Pr_{x ∈ Z_2^n}[x ∈ L^w_z] · Pr_{x, π ∈ S_J}[f(x) ≠ f(πx) | x ∈ L^w_z] = Σ_{z,w} (|L^w_z| / 2^n) · 2 p^w_z (1 − p^w_z).

This holds as in each such layer, the probability that x and πx result in two different outcomes is the probability that x is chosen from the smaller part and πx from its complement, or vice versa. The function f_J can be obtained by modifying f at a p^w_z fraction of the inputs in each layer L^w_z, as each layer can be addressed separately and we want to modify as few inputs as possible. By this observation, we have the following equality:

dist(f, f_J) = Σ_{z,w} (|L^w_z| / 2^n) · p^w_z.

But since 1 − p^w_z ∈ [1/2, 1], we have p^w_z ≤ 2 p^w_z (1 − p^w_z) ≤ 2 p^w_z, and therefore dist(f, f_J) ≤ SymInf_f(J) ≤ 2 dist(f, f_J), as required.

Corollary 2.5. Let f : Z_2^n → Z_2 be a function that is ε-far from being t-symmetric. Then for every set J ⊆ [n] of size |J| ≥ t, SymInf_f(J) ≥ ε holds.

Proof. Fix J ⊆ [n] of size |J| ≥ t and let g be a J-symmetric function closest to f. Since g is symmetric on any subset of J, it is in particular t-symmetric, and therefore dist(f, g) ≥ ε, as f is ε-far from being t-symmetric. Thus, by Lemma 2.4, SymInf_f(J) ≥ dist(f, g) ≥ ε holds.

Corollary 2.5 demonstrates the strong connection between symmetric influence and the distance from being partially symmetric, similar to the second part of Lemma 2.2 for influence and juntas. The additional properties of influence appearing in Lemma 2.2 are monotonicity and sub-additivity. The following lemmas show that the same properties (approximately) hold for symmetric influence.
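Definition 2.3 and the bounds of Lemma 2.4 can likewise be verified exhaustively for tiny n. In the sketch below (ours; 0-based indices) the dictator function f(x) = x_0 on two variables has SymInf_f({0, 1}) = 1/4, while its distance to the closest {0, 1}-symmetric function is also 1/4 (exactly one of the four inputs must be modified), consistent with dist ≤ SymInf ≤ 2·dist.

```python
from itertools import permutations, product

def sym_influence(f, n, J):
    """SymInf_f(J) of Definition 2.3: the probability over a uniform x and a
    uniform permutation pi of the coordinates in J that f(x) != f(pi x)."""
    J = sorted(J)
    total = changed = 0
    for x in product((0, 1), repeat=n):
        for perm in permutations(J):
            px = list(x)
            for src, dst in zip(J, perm):    # (pi x)_{pi(i)} = x_i
                px[dst] = x[src]
            total += 1
            changed += f(x) != f(tuple(px))
    return changed / total

# The dictator f(x) = x_0 on two variables: SymInf_f({0, 1}) = 1/4, and the
# closest {0, 1}-symmetric function is at distance exactly 1/4 (by inspection).
f = lambda x: x[0]
s = sym_influence(f, 2, {0, 1})
assert s == 0.25                 # dist <= SymInf <= 2 * dist holds with dist = 1/4
```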

Lemma 2.6 (Monotonicity). For any function f : Z_2^n → Z_2 and any sets J ⊆ K ⊆ [n], SymInf_f(J) ≤ SymInf_f(K).

Proof. Fix a function f and two sets J, K ⊆ [n] with J ⊆ K. We have seen before that the symmetric influence can be computed in layers, where each layer is determined by the Hamming weight and the elements outside the set we are considering. Using the fact that Var(X) = Pr[X = 0] · Pr[X = 1] for a Boolean random variable X, the symmetric influence is twice the expected variance over all the layers (weighted by the size of each layer). Using the same notation as before,

SymInf_f(J) = Σ_{z,w} (|L^w_z| / 2^n) · 2 Var_x[f(x) | x ∈ L^w_z] = 2 E_y[Var_x[f(x) | x ∈ L^{|y|}_{y_{[n]\J}}]],

where y is uniform over Z_2^n. A key observation is that since J ⊆ K, the layers determined when considering J are a refinement of the layers determined when considering K. Together with the fact that Var(X) = Pr[X = 0] · Pr[X = 1] is a concave function of Pr[X = 1] in the range [0, 1], we can apply Jensen's inequality on each layer before and after the refinement to get the desired inequality. More precisely, for every z ∈ Z_2^{[n]\K} and 0 ≤ w ≤ n,

Var_x[f(x) | x ∈ L^w_z] ≥ E_y[Var_x[f(x) | x ∈ L^{|y|}_{y_{[n]\J}}] | y ∈ L^w_z],

where the layer on the left is with respect to K and the layers on the right are with respect to J. Averaging this over all layers, we get the desired result.

Lemma 2.7 (Weak sub-additivity). There is a universal constant c such that, for any constant 0 < γ < 1, a function f : Z_2^n → Z_2, and sets J, K ⊆ [n] of size at least (1 − γ)n,

SymInf_f(J ∪ K) ≤ SymInf_f(J) + SymInf_f(K) + c√γ.

It might be tempting to think that strong sub-additivity holds, as for the standard notion of influence; however, this is not the case. For example, consider the function f(x) = f_1(x_J) + f_2(x_K) for some partition [n] = J ∪ K and

two randomly chosen symmetric functions f_1, f_2. Since f is far from symmetric, SymInf_f([n]) = SymInf_f(J ∪ K) > 0 while SymInf_f(J) = SymInf_f(K) = 0.

The additive factor of c √γ in Lemma 2.7 is derived from the distance between the distributions of πx and π_J π_K x, for a random x ∈ Z_2^n and random permutations π, π_J, π_K from S_{J∪K}, S_J, S_K, respectively. When the sets J and K are large, the distance between these distributions is relatively small, which results in this weak sub-additivity property.

The analysis of the lemma is done using hypergeometric distributions and the distance between them. Let H_{n,m,k} be the hypergeometric distribution obtained when we pick k balls out of n, m of which are red, and count the number of red balls we obtained. Let d_TV(·, ·) denote the statistical distance between two distributions. The following two lemmas will be useful for our proof.

Lemma 2.8. Let J, K ⊆ [n] be two sets and π, π_J, π_K be permutations chosen uniformly at random from S_{J∪K}, S_J, S_K, respectively. For a fixed x ∈ Z_2^n, we define D_{πx} and D_{π_J π_K x} as the distributions of πx and π_J π_K x, respectively. Then

  d_TV(D_{πx}, D_{π_J π_K x}) = d_TV(H_{|J∪K|, |x_{J∪K}|, |K\J|}, H_{|K|, |x_K|, |K\J|})

holds.

Lemma 2.9. Let n, m, n′, m′, k be non-negative integers with k, n′ ≤ γn for some γ ≤ 1/2. Suppose that |m − n/2| ≤ t √n and |m′ − n′/2| ≤ t √(n′) hold for some t > 0. Then

  d_TV(H_{n,m,k}, H_{n−n′, m−m′, k}) ≤ c_{2.9} (1 + t) γ

holds for some universal constant c_{2.9}.

We first show how these lemmas imply the proof of Lemma 2.7, and will afterwards prove them.

Proof of Lemma 2.7. Let π, π_J and π_K be as in Lemma 2.8 and fix x ∈ Z_2^n

to be some input. Then

  Pr_π[f(x) ≠ f(πx)] ≤ Pr_{π_J, π_K}[f(x) ≠ f(π_J π_K x)] + d_TV(D_{πx}, D_{π_J π_K x})
                     ≤ Pr_{π_K}[f(x) ≠ f(π_K x)] + Pr_{π_J, π_K}[f(π_K x) ≠ f(π_J π_K x)] + d_TV(D_{πx}, D_{π_J π_K x}).

By summing over all possible inputs x we have

  SymInf_f(J ∪ K) = Pr_{x,π}[f(x) ≠ f(πx)] = (1/2^n) Σ_x Pr_π[f(x) ≠ f(πx)]
                  ≤ SymInf_f(J) + SymInf_f(K) + (1/2^n) Σ_x d_TV(D_{πx}, D_{π_J π_K x}).

By applying Lemma 2.8 over each input x, it suffices to show that

  (1/2^n) Σ_x d_TV(D_{πx}, D_{π_J π_K x}) = (1/2^n) Σ_x d_TV(H_{|J∪K|, |x_{J∪K}|, |K\J|}, H_{|K|, |x_K|, |K\J|}) ≤ c √γ.    (2.1)

Ideally, we would like to apply Lemma 2.9 on every input x and get the desired result; however, this is not possible, as some inputs do not satisfy the requirements of the lemma. Therefore, we perform a slightly more careful analysis. Let us choose c ≥ 2 and assume γ ≤ 1/4 (as otherwise the claim trivially holds). Fix γ′ = γ/(1 − γ) ≤ 1/3 and t = 1/(100 √(γ′)). We first note that, regardless of x, the required conditions on the sizes of the sets hold. To be exact, |J\K| ≤ γ′ |J ∪ K| and |K\J| ≤ γ′ |J ∪ K|, since |J ∪ K| ≥ (1 − γ)n while |J\K| ≤ n − |K| ≤ γn (and similarly |K\J| ≤ γn). We say an input x is good if it satisfies the other conditions of Lemma 2.9, that is, if both ||x_{J∪K}| − |J ∪ K|/2| ≤ t √(|J ∪ K|) and ||x_{J\K}| − |J\K|/2| ≤ t √(|J\K|) hold. Otherwise we call such x bad. From the Chernoff bound and the union bound, the probability that a uniformly random x is bad is at most 4 exp(−2t²) = 4 exp(−1/(5000 γ′)) ≤ c′ γ for some constant c′ (notice that γ′ ≤ 2γ). By applying Lemma 2.9 over the good inputs we get

  (2.1) ≤ (1/2^n) Σ_{x bad} 1 + (1/2^n) Σ_{x good} c_{2.9} (1 + t) γ′ ≤ c′ γ + c_{2.9} (1 + t) γ′ ≤ c √γ

for some constant c, as required.

Proof of Lemma 2.8. Since both distributions D_{πx} and D_{π_J π_K x} only modify the coordinates in J ∪ K, we can ignore all other coordinates. Moreover, it in fact suffices to look only at the number of ones in the coordinates of K\J and J ∪ K, which completely determines the distributions.

Let D_z denote the uniform distribution over all elements y ∈ Z_2^n such that |y| = |x|, y_{[n]\(J∪K)} = x_{[n]\(J∪K)} and |y_{K\J}| = z (which also fixes the number of ones in y_J). Notice that this is well defined only for values of z such that max{0, |x_{J∪K}| − |J|} ≤ z ≤ min{|x_{J∪K}|, |K\J|}.

Given this notation, D_{πx} can be viewed as choosing z ~ H_{|J∪K|, |x_{J∪K}|, |K\J|} and returning y ~ D_z. This is because we apply a random permutation over all elements of J ∪ K, and therefore the number of ones inside K\J is indeed distributed like z. Moreover, the order inside both sets K\J and J is uniform.

The distribution D_{π_J π_K x} can be viewed as choosing z ~ H_{|K|, |x_K|, |K\J|} and returning y ~ D_z. The number of ones in K\J is determined already after applying π_K. It is distributed like z, as we care about the choice of the |K\J| locations out of the |K| elements of K, |x_K| of which are ones (and their order is uniform). Afterwards, we apply a random permutation π_J over all other relevant coordinates, so the order of the elements in J is also uniform.

Since the distributions D_z are disjoint for different values of z, this implies that the distance between the two distributions D_{πx} and D_{π_J π_K x} depends only on the number of ones chosen to be inside K\J. Therefore we have

  d_TV(D_{πx}, D_{π_J π_K x}) = d_TV(H_{|J∪K|, |x_{J∪K}|, |K\J|}, H_{|K|, |x_K|, |K\J|})

as required.

Proof of Lemma 2.9. Our proof uses the connection between the hypergeometric distribution and the binomial distribution, which we denote by B_{n,p} (for n experiments, each with success probability p). By the triangle inequality we know that

  d_TV(H_{n,m,k}, H_{n−n′, m−m′, k}) ≤ d_TV(H_{n,m,k}, B_{k,p}) + d_TV(B_{k,p}, B_{k,p′}) + d_TV(B_{k,p′}, H_{n−n′, m−m′, k})    (2.2)

where p = m/n and p′ = (m − m′)/(n − n′). In order to bound the distances we just introduced, we use the following two lemmas.

Lemma 2.10 (Example 1 in [67]). d_TV(H_{n,m,k}, B_{k,p}) ≤ k/n holds for p = m/n.

Lemma 2.11 ([1]). Let 0 < p < 1 and 0 < δ < 1 − p. Then

  d_TV(B_{n,p}, B_{n,p+δ}) ≤ (e/2) · τ_{n,p}(δ) / (1 − τ_{n,p}(δ))²

provided τ_{n,p}(δ) = δ √((n + 2)/(2p(1 − p))) < 1.

Before using the above lemmas, we analyze some of the parameters. First, when k = 0 the lemma trivially holds, and we therefore assume k ≥ 1. Notice that this implies γn ≥ k ≥ 1. The probability p is known to be relatively close to one half; to be exact, |p − 1/2| ≤ t √n / n = t/√n, and therefore 1/(p(1 − p)) < 6. Assume p′ ≥ p and let δ = p′ − p (the other case can be treated in the same manner). We first bound δ as follows:

  δ = (mn′ − nm′)/(n(n − n′)) ≤ t(n′ √n + n √(n′))/(n(n − n′)) ≤ 2t √γ · n^{3/2} / ((1 − γ) n²) ≤ 4t √(γ/n)   (from γ ≤ 1/2).

Then, τ_{k,p}(δ) in Lemma 2.11 can be bounded by

  τ_{k,p}(δ) = δ √((k + 2)/(2p(1 − p))) ≤ 4t √(γ/n) · √(3(k + 2))   (from 1/(p(1 − p)) < 6)
             ≤ 12t √(γk/n) ≤ 12tγ   (from 1 ≤ k ≤ γn).

Note that, from the assumption, we have τ_{k,p}(δ) ≤ 1/2. By Lemmas 2.10 and 2.11, we have

  (2.2) ≤ k/n + (e/2) · τ_{k,p}(δ) / (1 − τ_{k,p}(δ))² + k/(n − n′) ≤ 3γ + 2e · 12tγ   (from τ_{k,p}(δ) ≤ 1/2)
        ≤ c_{2.9} (1 + t) γ

for some universal constant c_{2.9}.
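The hypergeometric distances appearing above are easy to examine numerically. The following Python sketch, an illustration added here with arbitrary parameter values (not the exact regime of Lemma 2.9), computes the exact total variation distance between H_{n,m,k} and H_{n−n′,m−m′,k} and confirms it is small when k and n′ are small relative to n and m, m′ are close to n/2, n′/2.

```python
from math import comb

def hyper_pmf(n, m, k):
    """Pmf of the hypergeometric H_{n,m,k}: draw k balls out of n,
    m of which are red, and count the red balls drawn."""
    lo, hi = max(0, k - (n - m)), min(m, k)
    return {i: comb(m, i) * comb(n - m, k - i) / comb(n, k)
            for i in range(lo, hi + 1)}

def tv(p, q):
    """Total variation (statistical) distance between two finite distributions."""
    support = set(p) | set(q)
    return sum(abs(p.get(i, 0.0) - q.get(i, 0.0)) for i in support) / 2

# Illustrative parameters only: k and n' small relative to n, m and m'
# close to n/2 and n'/2, so the two distributions should be close.
n, m, n2, m2, k = 1000, 510, 20, 9, 5
d = tv(hyper_pmf(n, m, k), hyper_pmf(n - n2, m - m2, k))
assert abs(sum(hyper_pmf(n, m, k).values()) - 1) < 1e-9  # pmf sums to 1
assert d < 0.05
```

The computed distance here is well below the crude 0.05 check; shrinking k and n′ relative to n (i.e., shrinking γ) shrinks it further, in line with the lemma.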

Fourier representation

When dealing with Boolean functions, their Fourier representation is often a key tool to enhance and simplify proofs. This is also the case in the field of property testing, where several of the best known proofs rely on Fourier analysis. The strong connection between Fourier analysis and property testing of juntas may very well be rooted in the nice representation of influence in terms of the Fourier coefficients of a function. The new notion of symmetric influence also has a nice representation using the Fourier coefficients of the function. Although this is not used in this work, we find it important to describe this connection in more detail.

When discussing the Fourier representation of Boolean functions, it is easier to consider the range of the function as {−1, 1} rather than Z_2. In this setting, the Fourier basis in which we represent the function consists of the character functions χ_S : Z_2^n → {−1, 1} for every S ⊆ [n], defined by χ_S(x) = (−1)^{Σ_{i∈S} x_i}. We denote the Fourier decomposition of a Boolean function f by the sum

  f(x) = Σ_{S⊆[n]} f̂(S) χ_S(x),

where f̂(S) is the Fourier coefficient of the character χ_S in f. Using these notations, it is well known that the influence of a set of variables J can be computed by

  Inf_f(J) = Σ_{S⊆[n], |S∩J| > 0} f̂(S)².

The following proposition describes the connection between symmetric influence and the Fourier decomposition.

Proposition 2.12. Given a Boolean function f : Z_2^n → {−1, 1} and a set J ⊆ [n], the symmetric influence of J with respect to f can be computed as

  SymInf_f(J) = (1/2) Σ_{S⊆[n]} Var_{π ∈ S_J}[f̂(πS)],

where πS = {π(i) | i ∈ S}.

The proposition indicates that the symmetric influence of any set J can be computed as a function of the variance of the Fourier coefficients of the

function in the different layers. Each layer here refers to all the Fourier coefficients of sets which share the intersection with [n]\J and the intersection size with J, resulting in (|J| + 1) 2^{n−|J|} different layers. The key to proving this proposition is the following basic result on linear functions.

Lemma 2.13. Fix J, S, T ⊆ [n]. Then

  E_{x ∈ Z_2^n, π ∈ S_J}[χ_S(x) · χ_T(πx)] = 1/(|J| choose |S∩J|)  if πS = T for some π ∈ S_J, and 0 otherwise.

Proof. For any vector x ∈ Z_2^n, any set S ⊆ [n], and any permutation π ∈ S_n, we have the identity χ_S(πx) = χ_{π⁻¹S}(x). So

  E_{x ∈ Z_2^n, π ∈ S_J}[χ_S(x) χ_T(πx)] = E_{x,π}[χ_S(x) χ_{π⁻¹T}(x)] = E_π[E_x[χ_S(x) χ_{π⁻¹T}(x)]].

But E_x[χ_S(x) χ_{π⁻¹T}(x)] = 1[S = π⁻¹T], so we also have

  E_{x ∈ Z_2^n, π ∈ S_J}[χ_S(x) χ_T(πx)] = Pr_{π ∈ S_J}[S = π⁻¹T] = Pr_{π ∈ S_J}[πS = T].

The identity πS = T holds iff the permutation π satisfies π(i) ∈ T for every i ∈ S. Since we only permute elements from J, the sets S and T must agree on the elements of [n]\J. If this is not the case, or if the intersections of the two sets with J are not of the same size, no such permutation exists. Otherwise, this event occurs iff the elements of S ∩ J are mapped to the exact locations of T ∩ J. This holds for one out of the (|J| choose |S∩J|) possible sets of locations, each with equal probability.

Proof of Proposition 2.12. By appealing to the fact that f is {−1, 1}-valued, we have that

  Pr_{x,π}[f(x) ≠ f(πx)] = (1/4) E_{x,π}[f(x)² + f(πx)² − 2f(x)f(πx)].

Applying linearity of expectation and Parseval's identity, we obtain

  E_{x,π}[f(x)² + f(πx)² − 2f(x)f(πx)] = 2 Σ_{S⊆[n]} f̂(S)² − 2 Σ_{S,T⊆[n]} f̂(S) f̂(T) E_{x,π}[χ_S(x) χ_T(πx)].

Fix any S ⊆ [n]. By Lemma 2.13,

  Σ_{T⊆[n]} f̂(T) E_{x,π}[χ_S(x) χ_T(πx)] = Σ_{T : T = πS for some π ∈ S_J} f̂(T) / (|J| choose |S∩J|) = E_{π ∈ S_J}[f̂(πS)].

Given this equality,

  Σ_{S,T⊆[n]} f̂(S) f̂(T) E_{x,π}[χ_S(x) χ_T(πx)] = Σ_{S⊆[n]} f̂(S) E_{π ∈ S_J}[f̂(πS)].

By applying some elementary manipulation, we now get

  Pr_{x,π}[f(x) ≠ f(πx)] = (1/2) Σ_{S⊆[n]} (f̂(S)² − f̂(S) E_π[f̂(πS)])
                         = (1/2) Σ_{S⊆[n]} (E_π[f̂(πS)²] − E_π[f̂(πS)]²)
                         = (1/2) Σ_{S⊆[n]} Var_π[f̂(πS)].

2.3 Typical juntas and partially symmetric functions

Some of the local error correction results we present in this thesis are only applicable to most juntas and partially symmetric functions. In this section we describe some of the typical properties of both families of functions, properties on which we rely in our algorithms.

We start with two properties of juntas. The first, presented in Proposition 2.14, indicates that in a typical junta every influencing variable has constant influence. The second property bounds the distance between a typical junta and its isomorphisms, and is presented in Proposition 2.15. Notice that in both propositions, it suffices to consider the core of the junta rather than the entire function.
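The flavor of such typical-case statements is easy to observe empirically. The following Python sketch, an ad hoc illustration (not part of the thesis), samples random Boolean functions on k variables as truth tables and checks that even the minimum influence, over all variables and all sampled functions, stays well above the 0.1 threshold used below.

```python
import random

def influence(table, k, i):
    """Inf_f(i) = fraction of inputs x with f(x) != f(x with bit i flipped),
    for f given as a truth table over {0,1}^k."""
    diff = 0
    for x in range(2 ** k):
        diff += table[x] != table[x ^ (1 << i)]
    return diff / 2 ** k

random.seed(0)
k, trials = 8, 50
min_inf = 1.0
for _ in range(trials):
    table = [random.randrange(2) for _ in range(2 ** k)]
    min_inf = min(min_inf, min(influence(table, k, i) for i in range(k)))
# Each influence is a binomial average concentrating around 1/2,
# so even the minimum over all variables and trials stays far above 0.1.
assert min_inf > 0.1
```

Each flipped pair contributes independently, which is exactly the binomial concentration exploited in the proof of Proposition 2.14.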

Proposition 2.14. Let f : Z_2^k → Z_2 be a random Boolean function over k variables. Then with probability at least 1 − 2^{−Ω(k)}, every variable i ∈ [k] out of the k variables of f has influence Inf_f(i) > 0.1.

Proof. Let f be a random function over k variables. The influence of a variable i is determined by the number of pairs of inputs, differing only in the coordinate i, that disagree on the output. Since each output of the function is chosen independently, and as these 2^{k−1} pairs are disjoint, this number is in fact a binomial random variable. The influence of variable i is less than 0.1 only if at most 1/5 of these pairs disagree. This probability is thus

  Pr[B(2^{k−1}, 0.5) < (1/5) · 2^{k−1}] < 2^{−c 2^k}

for some absolute constant c > 0, where B is the binomial distribution and we applied one of the standard estimates for binomial distributions (cf., e.g., [10], Appendix A). Therefore, by the union bound, all k variables have influence greater than 0.1 with probability 1 − 2^{−Ω(k)}.

Proposition 2.15. Let f : Z_2^k → Z_2 be a random Boolean function over k variables. Then with probability at least 1 − 2^{−Ω(k)}, f is 0.1-far from any non-trivial isomorphic function.

Proof. Let f be a random function of k variables and let π ∈ S_k be any non-trivial permutation. Our goal is to show that Pr_x[f(x) ≠ f^π(x)] > 0.1 for every such π, with high probability (where the probability is over the choice of f and applies to all permutations simultaneously).

We will do a similar calculation to the one above. Here, however, we do not have a nice partition of the inputs into disjoint pairs, as some inputs remain unchanged after applying the permutation. Consider the partition of the inputs according to π into chains, that is, sequences of elements x, πx, π²x, ..., π^i x = x. Notice that if π is not the identity, there are at most 2^k/2 chains of length 1 (half of the elements). Looking at the elements of a chain of length i ≥ 2, ⌊i/2⌋ of them result in pairs (x, πx) so that all these events f(x) ≠ f(πx) are mutually independent. Thus in total we have at least 2^k/4 independent samples. As before, we can now bound the probability that Pr_x[f(x) ≠ f^π(x)] < 0.1 by the probability that at most 2/5 of these pairs would disagree (as

with probability at least 1/4 we fall into an element from our independent samples). We bound this probability by

  Pr[B(2^k/4, 0.5) < (2/5) · (2^k/4)] < 2^{−c′ 2^k}

for some absolute constant c′ > 0, where again we applied one of the standard estimates for binomial distributions. Since there are only k! − 1 non-trivial permutations, we can apply the union bound and conclude that over the choice of f, it is 0.1-far from all its non-trivial isomorphic copies with probability at least 1 − 2^{−Ω(k)}.

Properties of partially symmetric functions

Similarly to the above typical properties of juntas, we present here two analogous properties that typically hold for partially symmetric functions. The first, presented in Proposition 2.16, bounds the symmetric influence of sets containing at least one asymmetric and one symmetric variable. The second, described in Proposition 2.17, deals with the distance between such a function and its non-trivial isomorphisms.

Proposition 2.16. Let f : Z_2^n → Z_2 be a random (n−k)-symmetric function for some k < n (chosen uniformly over all such functions). Then with probability at least 1 − 2^{−Ω(√n)}, every asymmetric variable i and symmetric variable j have symmetric influence SymInf_f({i, j}) > 0.1.

Proof. Let f be a random (n−k)-symmetric function and assume without loss of generality that its asymmetric variables are the first k variables. We arbitrarily choose the asymmetric variable x_k and the symmetric variable x_{k+1}. Let x be a random vector and apply a random permutation of the coordinates x_k and x_{k+1}. Notice that only with probability 1/4 might we see two different outputs of f, as with probability 3/4 either x_k = x_{k+1} or the chosen permutation is the identity. We therefore restrict ourselves to inputs where x_k ≠ x_{k+1} and to the permutation which transposes x_k and x_{k+1}. We consider the partition of these inputs into 2^{k−1}(n−k) pairs of inputs to f_core, according to the value of x_1, ..., x_{k−1} and the Hamming weight of the variables x_{k+2}, ..., x_n. Notice that since we require x_k to be different from x_{k+1}, for

each such restriction we have precisely two inputs to the core. Moreover, the transposition of x_k and x_{k+1} only swaps between the two inputs of the same pair, meaning the pairs are independent. Define m = n − k − 1, and for every z ∈ Z_2^{k−1} and w ∈ {0, 1, ..., m} let X_{z,w} be the indicator random variable of the event that f_core agrees on the corresponding pair of inputs. Using these definitions, we can compute the symmetric influence of x_k and x_{k+1} as follows:

  SymInf_f({k, k+1}) = Σ_{z ∈ Z_2^{k−1}} Σ_{w=0}^{m} ((m choose w) / (8 · 2^{k−1} · 2^m)) · (1 + (−1)^{X_{z,w}}).

To bound the deviation of the symmetric influence, we additionally define for each z and w the random variable Y_{z,w} = (m choose w) · 2^{−(k−1)} · 2^{−m} · (−1)^{X_{z,w}}, so that 8 SymInf_f({k, k+1}) − 1 = Σ_{z,w} Y_{z,w}. The accumulated sum over Y_{z,w} for every z and w is a martingale. When we add a specific Y_{z,w} to the sum, we modify it by (m choose w) · 2^{−(k−1)} · 2^{−m} in absolute value. Therefore, by the Azuma-Hoeffding inequality,

  Pr[SymInf_f({k, k+1}) < 0.1] ≤ Pr[|8 SymInf_f({k, k+1}) − 1| > 1/5]
    = Pr[|Σ_{z,w} Y_{z,w}| > 1/5]
    ≤ 2 exp(−(1/5)² / (2 Σ_{z,w} ((m choose w) 2^{−(k−1)} 2^{−m})²))
    = 2 exp(−2^{k−1} 2^{2m} / (50 (2m choose m)))
    ≤ 2 exp(−(2^k/100) √(πm)) = 2^{−Ω(2^k √(n−k))},

where we used the known fact Σ_{i=0}^{m} (m choose i)² = (2m choose m) ≤ 2^{2m}/√(πm).

In order to complete the proof, we apply the union bound over the k possible choices for the asymmetric variable. Notice that for the symmetric variables it suffices to consider a single choice, as they are symmetric. Thus the probability that any such symmetric influence would be smaller than 0.1 is bounded by k · 2^{−Ω(2^k √(n−k))} = 2^{−Ω(√n)}, as required.

Proposition 2.17. Let f : Z_2^n → Z_2 be a random (n−k)-symmetric function for some k < n, and let J be the set of its asymmetric variables. Then with

SIAM J. COMPUT., Vol. 44, No. 2, pp. 411-432, © 2015 Society for Industrial and Applied Mathematics. Partially Symmetric Functions Are Efficiently Isomorphism Testable. Eric Blais, Amit Weinstein, and Yuichi Yoshida.


More information

Lecture 22: Quantum computational complexity

Lecture 22: Quantum computational complexity CPSC 519/619: Quantum Computation John Watrous, University of Calgary Lecture 22: Quantum computational complexity April 11, 2006 This will be the last lecture of the course I hope you have enjoyed the

More information

Probabilistic Proofs of Existence of Rare Events. Noga Alon

Probabilistic Proofs of Existence of Rare Events. Noga Alon Probabilistic Proofs of Existence of Rare Events Noga Alon Department of Mathematics Sackler Faculty of Exact Sciences Tel Aviv University Ramat-Aviv, Tel Aviv 69978 ISRAEL 1. The Local Lemma In a typical

More information

Testing Graph Isomorphism

Testing Graph Isomorphism Testing Graph Isomorphism Eldar Fischer Arie Matsliah Abstract Two graphs G and H on n vertices are ɛ-far from being isomorphic if at least ɛ ( n 2) edges must be added or removed from E(G) in order to

More information

Sensitivity, Block Sensitivity and Certificate Complexity of Boolean Functions (Master s Thesis)

Sensitivity, Block Sensitivity and Certificate Complexity of Boolean Functions (Master s Thesis) Sensitivity, Block Sensitivity and Certificate Complexity of Boolean Functions (Master s Thesis) Sourav Chakraborty Thesis Advisor: László Babai February, 2005 Abstract We discuss several complexity measures

More information

DR.RUPNATHJI( DR.RUPAK NATH )

DR.RUPNATHJI( DR.RUPAK NATH ) Contents 1 Sets 1 2 The Real Numbers 9 3 Sequences 29 4 Series 59 5 Functions 81 6 Power Series 105 7 The elementary functions 111 Chapter 1 Sets It is very convenient to introduce some notation and terminology

More information

Testing Properties of Boolean Functions

Testing Properties of Boolean Functions Testing Properties of Boolean Functions Eric Blais CMU-CS-1-101 January 01 School of Computer Science Carnegie Mellon University Pittsburgh, PA 1513 Thesis Committee: Ryan O Donnell, Chair Avrim Blum Venkatesan

More information

Avoider-Enforcer games played on edge disjoint hypergraphs

Avoider-Enforcer games played on edge disjoint hypergraphs Avoider-Enforcer games played on edge disjoint hypergraphs Asaf Ferber Michael Krivelevich Alon Naor July 8, 2013 Abstract We analyze Avoider-Enforcer games played on edge disjoint hypergraphs, providing

More information

The Algorithmic Aspects of the Regularity Lemma

The Algorithmic Aspects of the Regularity Lemma The Algorithmic Aspects of the Regularity Lemma N. Alon R. A. Duke H. Lefmann V. Rödl R. Yuster Abstract The Regularity Lemma of Szemerédi is a result that asserts that every graph can be partitioned in

More information

The Turán number of sparse spanning graphs

The Turán number of sparse spanning graphs The Turán number of sparse spanning graphs Noga Alon Raphael Yuster Abstract For a graph H, the extremal number ex(n, H) is the maximum number of edges in a graph of order n not containing a subgraph isomorphic

More information

18.5 Crossings and incidences

18.5 Crossings and incidences 18.5 Crossings and incidences 257 The celebrated theorem due to P. Turán (1941) states: if a graph G has n vertices and has no k-clique then it has at most (1 1/(k 1)) n 2 /2 edges (see Theorem 4.8). Its

More information

Properly colored Hamilton cycles in edge colored complete graphs

Properly colored Hamilton cycles in edge colored complete graphs Properly colored Hamilton cycles in edge colored complete graphs N. Alon G. Gutin Dedicated to the memory of Paul Erdős Abstract It is shown that for every ɛ > 0 and n > n 0 (ɛ), any complete graph K on

More information

Multi-coloring and Mycielski s construction

Multi-coloring and Mycielski s construction Multi-coloring and Mycielski s construction Tim Meagher Fall 2010 Abstract We consider a number of related results taken from two papers one by W. Lin [1], and the other D. C. Fisher[2]. These articles

More information

Asymptotically optimal induced universal graphs

Asymptotically optimal induced universal graphs Asymptotically optimal induced universal graphs Noga Alon Abstract We prove that the minimum number of vertices of a graph that contains every graph on vertices as an induced subgraph is (1 + o(1))2 (

More information

An introduction to basic information theory. Hampus Wessman

An introduction to basic information theory. Hampus Wessman An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on

More information

Not all counting problems are efficiently approximable. We open with a simple example.

Not all counting problems are efficiently approximable. We open with a simple example. Chapter 7 Inapproximability Not all counting problems are efficiently approximable. We open with a simple example. Fact 7.1. Unless RP = NP there can be no FPRAS for the number of Hamilton cycles in a

More information

5 + 9(10) + 3(100) + 0(1000) + 2(10000) =

5 + 9(10) + 3(100) + 0(1000) + 2(10000) = Chapter 5 Analyzing Algorithms So far we have been proving statements about databases, mathematics and arithmetic, or sequences of numbers. Though these types of statements are common in computer science,

More information

Handout 5. α a1 a n. }, where. xi if a i = 1 1 if a i = 0.

Handout 5. α a1 a n. }, where. xi if a i = 1 1 if a i = 0. Notes on Complexity Theory Last updated: October, 2005 Jonathan Katz Handout 5 1 An Improved Upper-Bound on Circuit Size Here we show the result promised in the previous lecture regarding an upper-bound

More information

The Lovász Local Lemma : A constructive proof

The Lovász Local Lemma : A constructive proof The Lovász Local Lemma : A constructive proof Andrew Li 19 May 2016 Abstract The Lovász Local Lemma is a tool used to non-constructively prove existence of combinatorial objects meeting a certain conditions.

More information

arxiv: v2 [math.co] 11 Mar 2008

arxiv: v2 [math.co] 11 Mar 2008 Testing properties of graphs and functions arxiv:0803.1248v2 [math.co] 11 Mar 2008 Contents László Lovász Institute of Mathematics, Eötvös Loránd University, Budapest and Balázs Szegedy Department of Mathematics,

More information

Settling the Query Complexity of Non-Adaptive Junta Testing

Settling the Query Complexity of Non-Adaptive Junta Testing Settling the Query Complexity of Non-Adaptive Junta Testing Erik Waingarten, Columbia University Based on joint work with Xi Chen (Columbia University) Rocco Servedio (Columbia University) Li-Yang Tan

More information

Broadcasting With Side Information

Broadcasting With Side Information Department of Electrical and Computer Engineering Texas A&M Noga Alon, Avinatan Hasidim, Eyal Lubetzky, Uri Stav, Amit Weinstein, FOCS2008 Outline I shall avoid rigorous math and terminologies and be more

More information

Lecture 8: Shannon s Noise Models

Lecture 8: Shannon s Noise Models Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 8: Shannon s Noise Models September 14, 2007 Lecturer: Atri Rudra Scribe: Sandipan Kundu& Atri Rudra Till now we have

More information

Aditya Bhaskara CS 5968/6968, Lecture 1: Introduction and Review 12 January 2016

Aditya Bhaskara CS 5968/6968, Lecture 1: Introduction and Review 12 January 2016 Lecture 1: Introduction and Review We begin with a short introduction to the course, and logistics. We then survey some basics about approximation algorithms and probability. We also introduce some of

More information

NP Completeness and Approximation Algorithms

NP Completeness and Approximation Algorithms Chapter 10 NP Completeness and Approximation Algorithms Let C() be a class of problems defined by some property. We are interested in characterizing the hardest problems in the class, so that if we can

More information

Notes for Lecture 3... x 4

Notes for Lecture 3... x 4 Stanford University CS254: Computational Complexity Notes 3 Luca Trevisan January 18, 2012 Notes for Lecture 3 In this lecture we introduce the computational model of boolean circuits and prove that polynomial

More information

Fourier analysis of boolean functions in quantum computation

Fourier analysis of boolean functions in quantum computation Fourier analysis of boolean functions in quantum computation Ashley Montanaro Centre for Quantum Information and Foundations, Department of Applied Mathematics and Theoretical Physics, University of Cambridge

More information

Lecture 24: April 12

Lecture 24: April 12 CS271 Randomness & Computation Spring 2018 Instructor: Alistair Sinclair Lecture 24: April 12 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They

More information

Is submodularity testable?

Is submodularity testable? Is submodularity testable? C. Seshadhri IBM Almaden Research Center 650 Harry Road, San Jose, CA 95120 csesha@us.ibm.com Jan Vondrák IBM Almaden Research Center 650 Harry Road, San Jose, CA 95120 jvondrak@us.ibm.com

More information

Testing Hereditary Properties of Ordered Graphs and Matrices

Testing Hereditary Properties of Ordered Graphs and Matrices Testing Hereditary Properties of Ordered Graphs and Matrices Noga Alon, Omri Ben-Eliezer, Eldar Fischer April 6, 2017 Abstract We consider properties of edge-colored vertex-ordered graphs, i.e., graphs

More information

Isomorphisms between pattern classes

Isomorphisms between pattern classes Journal of Combinatorics olume 0, Number 0, 1 8, 0000 Isomorphisms between pattern classes M. H. Albert, M. D. Atkinson and Anders Claesson Isomorphisms φ : A B between pattern classes are considered.

More information

Lecture 21: P vs BPP 2

Lecture 21: P vs BPP 2 Advanced Complexity Theory Spring 206 Prof. Dana Moshkovitz Lecture 2: P vs BPP 2 Overview In the previous lecture, we began our discussion of pseudorandomness. We presented the Blum- Micali definition

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Choosability and fractional chromatic numbers

Choosability and fractional chromatic numbers Choosability and fractional chromatic numbers Noga Alon Zs. Tuza M. Voigt This copy was printed on October 11, 1995 Abstract A graph G is (a, b)-choosable if for any assignment of a list of a colors to

More information

A Generalized Turán Problem and its Applications

A Generalized Turán Problem and its Applications A Generalized Turán Problem and its Applications Lior Gishboliner Asaf Shapira Abstract The investigation of conditions guaranteeing the appearance of cycles of certain lengths is one of the most well-studied

More information

Testing of bipartite graph properties

Testing of bipartite graph properties Testing of bipartite graph properties Noga Alon Eldar Fischer Ilan Newman June 15, 2005 Abstract Alon et. al. [3], showed that every property that is characterized by a finite collection of forbidden induced

More information

Katarzyna Mieczkowska

Katarzyna Mieczkowska Katarzyna Mieczkowska Uniwersytet A. Mickiewicza w Poznaniu Erdős conjecture on matchings in hypergraphs Praca semestralna nr 1 (semestr letni 010/11 Opiekun pracy: Tomasz Łuczak ERDŐS CONJECTURE ON MATCHINGS

More information

Tutorial: Locally decodable codes. UT Austin

Tutorial: Locally decodable codes. UT Austin Tutorial: Locally decodable codes Anna Gál UT Austin Locally decodable codes Error correcting codes with extra property: Recover (any) one message bit, by reading only a small number of codeword bits.

More information

Learning convex bodies is hard

Learning convex bodies is hard Learning convex bodies is hard Navin Goyal Microsoft Research India navingo@microsoft.com Luis Rademacher Georgia Tech lrademac@cc.gatech.edu Abstract We show that learning a convex body in R d, given

More information

A Combinatorial Characterization of the Testable Graph Properties: It s All About Regularity

A Combinatorial Characterization of the Testable Graph Properties: It s All About Regularity A Combinatorial Characterization of the Testable Graph Properties: It s All About Regularity ABSTRACT Noga Alon Tel-Aviv University and IAS nogaa@tau.ac.il Ilan Newman Haifa University ilan@cs.haifa.ac.il

More information

An exponential separation between quantum and classical one-way communication complexity

An exponential separation between quantum and classical one-way communication complexity An exponential separation between quantum and classical one-way communication complexity Ashley Montanaro Centre for Quantum Information and Foundations, Department of Applied Mathematics and Theoretical

More information

MATH2206 Prob Stat/20.Jan Weekly Review 1-2

MATH2206 Prob Stat/20.Jan Weekly Review 1-2 MATH2206 Prob Stat/20.Jan.2017 Weekly Review 1-2 This week I explained the idea behind the formula of the well-known statistic standard deviation so that it is clear now why it is a measure of dispersion

More information

Non-linear index coding outperforming the linear optimum

Non-linear index coding outperforming the linear optimum Non-linear index coding outperforming the linear optimum Eyal Lubetzky Uri Stav Abstract The following source coding problem was introduced by Birk and Kol: a sender holds a word x {0, 1} n, and wishes

More information

Answering Many Queries with Differential Privacy

Answering Many Queries with Differential Privacy 6.889 New Developments in Cryptography May 6, 2011 Answering Many Queries with Differential Privacy Instructors: Shafi Goldwasser, Yael Kalai, Leo Reyzin, Boaz Barak, and Salil Vadhan Lecturer: Jonathan

More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

6.842 Randomness and Computation April 2, Lecture 14

6.842 Randomness and Computation April 2, Lecture 14 6.84 Randomness and Computation April, 0 Lecture 4 Lecturer: Ronitt Rubinfeld Scribe: Aaron Sidford Review In the last class we saw an algorithm to learn a function where very little of the Fourier coeffecient

More information

Error Correcting Codes: Combinatorics, Algorithms and Applications Spring Homework Due Monday March 23, 2009 in class

Error Correcting Codes: Combinatorics, Algorithms and Applications Spring Homework Due Monday March 23, 2009 in class Error Correcting Codes: Combinatorics, Algorithms and Applications Spring 2009 Homework Due Monday March 23, 2009 in class You can collaborate in groups of up to 3. However, the write-ups must be done

More information

List Decoding of Reed Solomon Codes

List Decoding of Reed Solomon Codes List Decoding of Reed Solomon Codes p. 1/30 List Decoding of Reed Solomon Codes Madhu Sudan MIT CSAIL Background: Reliable Transmission of Information List Decoding of Reed Solomon Codes p. 2/30 List Decoding

More information

Information Complexity and Applications. Mark Braverman Princeton University and IAS FoCM 17 July 17, 2017

Information Complexity and Applications. Mark Braverman Princeton University and IAS FoCM 17 July 17, 2017 Information Complexity and Applications Mark Braverman Princeton University and IAS FoCM 17 July 17, 2017 Coding vs complexity: a tale of two theories Coding Goal: data transmission Different channels

More information