On the Randomness Complexity of Efficient Sampling
Bella Dubrov


On the Randomness Complexity of Efficient Sampling

Research Thesis

Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Science

Bella Dubrov

Submitted to the Senate of the Technion, Israel Institute of Technology
Adar 5766, Haifa, March 2006


The Research Thesis Was Done Under The Supervision of Dr. Yuval Ishai in the Department of Computer Science.

I wish to express my deep gratitude to my advisor, Yuval Ishai, for his wise guidance, constant encouragement and many inspiring discussions. I would also like to thank Eyal Kushilevitz and Ronen Shaltiel for useful comments and suggestions regarding this thesis. The generous financial help of the Technion is gratefully acknowledged.


Dedicated to my mother, grandfather and Vadim


Contents

Abstract 1
1 Introduction
  1.1 Our Contribution
2 Preliminaries
  2.1 Boolean Circuits
  2.2 Cryptography and Pseudorandom Generators
  2.3 The Nisan-Wigderson Pseudorandom Generator
  2.4 Function Compression
3 Pseudorandom Generators Fooling Non-Boolean Distinguishers
  3.1 Cryptographic nb-PRGs
  3.2 Nisan-Wigderson Style nb-PRGs
  3.3 nb-PRGs for Constant Depth Circuits
4 Compression Lower Bounds for Parity
  4.1 Exact Compression of Parity
  4.2 Average-Case Compression of Parity
    Proof of Part 1 of Theorem
    Proof of Part 2 of Theorem
5 A Win-Win Result 31
6 Applications
  6.1 Probabilistic Functions and Random Sampling
    Relation with the Promise-P vs. Promise-BPP Question
    Matching the Entropy Bound
  6.2 Cryptographic Applications
7 Conclusions and Open Problems 53
References 55
Hebrew Abstract i


List of Tables

3.1 Summary of nb-PRG parameter settings
    Summary of randomness parameters for samplers


Abstract

We consider the following question: Can every efficiently samplable distribution be efficiently sampled, up to a small statistical distance, using roughly as much randomness as the length of its output? Towards a study of this question we generalize the current theory of pseudorandomness and consider pseudorandom generators that fool non-boolean distinguishers (nb-PRGs). We show a link between nb-PRGs and a notion of function compression, introduced by Harnik and Naor [18]. (A compression algorithm for f should efficiently compress an input x in a way that preserves the information needed to compute f(x).) By constructing nb-PRGs, we answer the above question affirmatively under the following types of assumptions:

- Cryptographic incompressibility assumptions (which are implied by, and seem weaker than, exponential cryptographic assumptions).
- Nisan-Wigderson style (average-case) incompressibility assumptions for polynomial-time computable functions.
- No assumptions are needed for answering our question affirmatively in the case of constant-depth samplers.

To complement the above, we extend an idea from [18] and establish the following win-win situation: if the answer to our main question is no, then it is possible to construct a (weak variant of) collision-resistant hash function from any one-way permutation. The latter would be considered a surprising result, as a black-box construction of this type was ruled out by Simon [38]. Finally, we present an application of nb-PRGs to information-theoretic cryptography. Specifically, under any of the above assumptions, efficient protocols for information-theoretic secure multiparty computation never need to use (much) more randomness than communication.

An extended abstract of our results will appear in [9].


Notation and Abbreviations

U_n : the uniform distribution on {0, 1}^n
U_S : the uniform distribution on the set S
x ∈_R X : the choice of x according to the distribution X
H(X) : the Shannon entropy of X
SD(X, Y) : the statistical distance between X and Y
PPTM : probabilistic polynomial-time Turing machine
⟨u, v⟩ : the inner product of the vectors u and v modulo 2
∘ : concatenation
f : n → m(n) : the function f outputs a string of length m(n) on inputs of length n
f^k(x) : f(f(... f(x))) (k times)
Size(s(n)) : the class of circuits of size s(n)
Size(s(n)) ∩ Depth(d(n)) : the class of circuits of size s(n) and depth d(n)
s-DNF : a DNF with terms of length s
s-CNF : a CNF with clauses of length s


Chapter 1

Introduction

In their 1976 paper, Knuth and Yao [30] consider the following problem. Suppose we wish to sample from some distribution R on {0, 1}^m. How many random bits are needed in order to do this? Knuth and Yao showed that there exists a (possibly inefficient, or even nonterminating) algorithm that samples R using H(R) + O(1) random bits on average, where H(·) denotes Shannon entropy. We investigate the computational analog of this question, namely the case where R is samplable by some efficient algorithm D. Our goal is to reduce the amount of randomness that D uses as much as possible, when we are willing to settle for sampling from a distribution which is statistically¹ close to R. The question we ask is the following:

Can every efficiently samplable distribution be efficiently sampled, up to a small statistical distance, using roughly as much randomness as the length of its output?

(Note that in case the Shannon entropy is efficiently computable, or is given as advice, one could hope to match the entropy bound. We will deal with this more refined question later.) Building randomness-efficient (or completely derandomized) algorithms for various purposes has been the subject of extensive research, e.g. [34, 35, 22, 23, 31, 29, 7, 40, 26, 21, 39, 27]. Considerable effort has been devoted to derandomizing algorithms that compute deterministic boolean functions, with P = BPP being a major open problem in the area. A more general problem is the reduction of the amount of randomness used by algorithms that compute general (not necessarily boolean) probabilistic functions.² A probabilistic function f is a random process that on input x outputs a sample from some distribution f(x).
Therefore sampling algorithms comprise a special case of probabilistic functions (where the input is over a unary alphabet or, equivalently, the output distribution depends only on the length of the input). In the following, whenever we refer to sampling algorithms, the discussion can be generalized to arbitrary probabilistic functions.

¹ Note that computational closeness can be achieved using standard pseudorandom generators. While this suffices for most real-life applications of sampling, there are contexts in which one actually needs the strict notion of statistical closeness considered here. Such an application in cryptography is discussed in Section 6.2.
² We note that in the case of probabilistic functions with logarithmic output length this problem is equivalent to the Promise-P vs. Promise-BPP question. See Section 6.1 for more details.

Pseudorandomness. A major tool in the area of derandomization is the notion of a pseudorandom generator. A pseudorandom generator (PRG) is an efficient deterministic algorithm that stretches a short random string (called the seed) into a long pseudorandom string, i.e., a string that looks random to computationally bounded distinguishers. In the classical setting the distinguisher gets some input string and outputs a single bit. The distinguisher is said to be fooled by the PRG if the probability that it outputs 1 on a random string is close to the probability that it outputs 1 on a pseudorandom string. Except for very limited distinguisher classes (e.g. constant-depth circuits), the constructions of pseudorandom generators rely on some hardness assumptions. The first approach, initiated by Blum-Micali and Yao, is to use cryptographic assumptions, such as the existence of one-way permutations (OWPs) [5, 42] and one-way functions [16]. The resulting generators have polynomial stretch (i.e., the seed length and the output length are polynomially related), are computable in polynomial time and, provided the assumption holds, fool any polynomial-time distinguisher. Relaxing the requirements, Nisan and Wigderson [35] constructed pseudorandom generators relying on weaker assumptions, namely (non-uniform) average-case hardness assumptions, which in turn can be reduced to corresponding worst-case assumptions [1, 23, 40]. The resulting generators fool distinguishers of fixed circuit size and are allowed to be computable in time exponential in their seed length. This makes it possible to reduce the seed length to O(log n), where n is the output length. It should be noted that these PRGs require more time to compute than the size of the circuit they try to fool.
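As a concrete illustration of the Knuth-Yao bound cited at the start of this chapter, the following is a minimal Python sketch (ours, not part of the thesis) of their DDG-tree sampler. It consumes one fresh random bit per tree level, returns outcome i with probability exactly probs[i], and (by the Knuth-Yao theorem) uses at most H(R) + 2 bits on average; for non-dyadic probabilities it terminates only almost surely, matching the "possibly nonterminating" caveat above.

```python
import random
from fractions import Fraction

def knuth_yao_sample(probs, rng=random):
    """Walk the Knuth-Yao DDG tree: consume one random bit per level and
    return outcome i with probability probs[i] (exact rationals summing to 1)."""
    d = 0          # index of the current node among the non-terminal nodes
    depth = 0
    while True:
        depth += 1
        d = 2 * d + rng.getrandbits(1)
        for i, p in enumerate(probs):
            # leaves for outcome i at this level = depth-th binary digit of p
            d -= int(Fraction(p) * 2**depth) & 1
            if d < 0:
                return i
```

For example, with probs = [1/2, 1/4, 1/4] the walk stops after 1.5 bits on average, which equals H(R) here.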
1.1 Our Contribution

In order to deal with reducing the amount of randomness required by probabilistic functions and samplers, we present a natural generalization of the classical notion of PRGs: pseudorandom generators that fool non-boolean distinguishers (nb-PRGs). In our setting the distinguisher gets an input string of length n and outputs some string of length m(n). We say that the nb-PRG fools the distinguisher if the distribution of the distinguisher's output on a random string is statistically close to the distribution of its output on a pseudorandom string. The question of constructing pseudorandom generators that fool non-boolean distinguishers was posed in [24].

Typical parameters. The typical parameters of nb-PRGs that are useful for our applications are the following. First note that every nb-PRG fooling distinguishers that output m bits must have seed length at least m, since otherwise the distinguisher can just output the first m bits of its input. Suppose that we want to reduce the amount of randomness used by a sampler that uses n random bits and outputs m(n) = n^γ bits, for some constant 0 < γ < 1. The sampler runs in poly(n) time and we want the new sampler (with the reduced randomness) to also be efficient. We would therefore like to have an nb-PRG with the shortest seed possible (i.e., of length m) and with polynomial stretch, which is computable in poly(n) time. In the sequel we call this parameter setting the polynomial setting. Note that this is different from the case of standard PRGs for derandomizing BPP, where the PRG can be computable in time exponential in its seed length. The reason for this is that in the standard setting one needs to iterate over all the possible seeds anyway in order to achieve full derandomization. The seed length in the standard setting would ideally be O(log n). In our setting, however, we cannot hope for full derandomization, since the processes that we deal with are inherently probabilistic. The seed length in our setting would typically be polynomially related to the output length of the generator.

It is obvious that in the general case, similarly to standard PRGs, the construction of nb-PRGs should rely on hardness assumptions. But what type of assumptions can be useful for this task? Observe that a standard PRG fooling circuits of size s + 2^m is also an nb-PRG fooling circuits of size s that output m bits. (This is so since a distinguisher that outputs m bits can be converted into a boolean distinguisher at the expense of a 2^m increase in its size.) It follows that good nb-PRGs can be constructed based on exponential-strength cryptographic assumptions, and in particular from the existence of a OWP with exponential hardness (since such OWPs imply the existence of PRGs with exponential hardness). In contrast, in the Nisan-Wigderson setting, a PRG fooling circuits of size 2^m will be computable in time greater than 2^m. Thus, under such assumptions we do not get efficient nb-PRGs for the polynomial parameter setting. We would like to construct nb-PRGs based on weaker assumptions than exponential-strength OWPs. Our results in this direction are outlined below.

Function compression. We show that the notion of nb-PRGs is closely connected with the notion of function compression³ introduced by Harnik and Naor [18]. Consider the following setting. A bounded player, called the compressor, wishes to compute some function f on an input x of length n. Because of its limited computing power, the compressor cannot compute f(x) directly.
Instead, the compressor has a connection with a computationally unbounded player called the solver, who is willing to compute f(x). However, the compressor is only allowed to communicate m(n) < n bits to the solver. Therefore the compressor needs to somehow compress x to m(n) bits in a way that preserves the information needed to compute f(x). We say that f can be compressed to m(n) bits by some algorithm C if on inputs of size n the algorithm C outputs m(n) bits and there exists an unbounded solver S such that S(C(x)) = f(x) for all x. This (worst-case) notion of compression can be naturally generalized to average-case compression, where we measure the fraction of inputs x on which S(C(x)) = f(x).

Constructing nb-PRGs. We show that the assumption that function compression is hard has some useful consequences in our context. More specifically, we demonstrate that nb-PRGs can be constructed based on (average-case) function compression hardness assumptions.⁴ In particular, the cryptographic and the Nisan-Wigderson constructions both give rise to nb-PRGs if, instead of standard hardness assumptions, compression hardness assumptions are used. We construct nb-PRGs for the polynomial parameter setting based on the following assumptions. In the cryptographic setting we construct nb-PRGs based on polynomial-strength cryptographic incompressibility assumptions, namely that there exists a (polynomial-strength) OWP with a hard-core bit that is hard to compress. We note that, using an exact complexity analysis of the hard-core bit construction of Goldreich and Levin [13], this assumption is implied by an exponentially strong OWP. We can also base our nb-PRGs on Nisan-Wigderson style incompressibility assumptions, namely on the existence of a function in P that is hard to compress on average to, say, n/2 bits by circuits of some fixed polynomial size. For instance, assuming the existence of a function in P that cannot be compressed to n/2 bits with advantage 1/n^8 by circuits of size O(n^8), we obtain nb-PRGs that fool (up to an error ɛ = 1/n) distinguishers of linear size that output m = n^{1/4} bits, where the seed length is l = O(m^2). Note that, for the reasons discussed above, we need to base our constructions on hard functions in P (instead of the hard functions in E that are used for constructing standard PRGs). In the classical setting, the average-case assumptions can be relaxed to worst-case assumptions by using worst-case to average-case hardness amplification techniques [1, 23, 40]. Since no hardness amplification techniques for functions in P are known, we cannot relax our assumptions to worst-case. Similarly, PRG constructions based on a polynomial encoding of the hard function, such as [37, 41], cannot be used in our setting. These constructions require the computation of a low-degree extension of the hard boolean function, that is, a low-degree polynomial over a finite field that agrees with the boolean function on its domain. The problem is that the low-degree extension of a function in P is generally #P-hard to compute, so the resulting generator will not be efficient.

A win-win result. We also reach some interesting conclusions in case our hardness assumptions do not hold.

³ This new notion is not to be confused with other notions of language compression that appear in the literature, e.g. [11].
⁴ Note that the existence of an nb-PRG fooling distinguishers that output m bits implies the existence of a function in NP that is hard to compress to m bits (the function equals 1 on the set of images of the nb-PRG).
Harnik and Naor [18] suggest applications of function compression to cryptography. In particular, they show that if SAT can be efficiently compressed (with certain parameters) then a collision-resistant hash function (CRHF) can be built from any one-way function. Our results can be viewed as complementary to theirs: they show consequences of the existence of good compression algorithms, whereas we show consequences of the non-existence of such algorithms. However, there is a considerable gap between the easiness results exploited by [18] and the hardness results we require. First, [18] requires good compression in the worst case, whereas we rely on incompressibility in the average case. More importantly, even if SAT is incompressible on average, this does not imply a positive answer to our main question, since the resulting nb-PRGs will typically be computationally inefficient. Thus, to establish a tighter win-win situation we need to rely on a different construction of CRHFs, whose failure would imply the type of negative compression results that is useful for our purpose. Using this approach, we show the following result. Suppose OWPs exist.⁵ Then either the answer to our main question is yes, or there exists a distributional collision-resistant hash function (d-CRHF). Distributional collision-resistant hash functions are a weaker variant of CRHFs that we define: instead of requiring that finding an arbitrary collision is hard, we require that finding a random collision is hard. A different way of interpreting the above result is that if the answer to our main question is no, then OWP implies d-CRHF. The latter implication would be considered a surprising result, since Simon [38] shows an oracle relative to which OWPs exist but CRHFs do not, hence ruling out a black-box construction of CRHFs from OWPs. We note that relative to the same oracle used in [38], d-CRHFs also do not exist, thus no black-box construction of d-CRHFs from OWPs is possible.

⁵ For this result we only require the existence of standard (polynomial-strength) OWPs (as opposed to exponential-strength OWPs, which imply a positive answer to our question).

Unconditional results for constant-depth samplers. We prove unconditional lower bounds on the size of constant-depth circuits that compress the parity function. As in the standard setting, this gives rise to unconditional nb-PRGs that fool constant-depth circuits. Our lower bounds generalize the lower bounds for computing parity obtained by Håstad [15]. Using the results of [15] directly, the compression bounds that can be obtained are only for m = o(n^{1/(d+2)}), where d is the depth of the circuit. We extend these results to obtain nearly tight bounds that apply to every value of m of the form m = n^δ, for any constant 0 < δ < 1. Thus we get (unconditionally) efficient nb-PRGs for constant-depth samplers. Our compression lower bounds show that in the specific context of computing parity by constant-depth circuits, the relaxation of standard computation to compression does not give much more computing power.

Application to cryptography. As was already mentioned, the main application that motivated our notion of nb-PRGs is the problem of reducing the amount of randomness used by sampling algorithms. Our generators also have a cryptographic application. Specifically, they can be used to reduce the amount of randomness in protocols for information-theoretic secure multiparty computation. In this problem a set of players wish to compute a function of their inputs, while maintaining the (information-theoretic) privacy of their data against a computationally unbounded adversary. There has been a significant body of work on characterizing the amount of randomness used by such protocols, both for specific and general tasks, e.g. [6, 32, 31, 7, 10].
We show that using nb-PRGs (and under the corresponding assumptions), efficient protocols for information-theoretic secure multiparty computation never need to use (much) more randomness than communication. (In fact, it suffices to use roughly as much randomness as the amount of communication viewed by an adversarial coalition.)

Matching the entropy bound. Finally, we consider the more refined question of reducing the amount of randomness to match the entropy of the sampled distribution. Recall that Knuth and Yao [30] showed that this can be done non-explicitly. We show an explicit version of this result. Specifically, under exponential-strength cryptographic assumptions, or when the probability function of the sampled distribution can be computed in polynomial time, the amount of randomness needed to efficiently sample R can be reduced to O(H(R)).

Related Work. Several known PRGs for space-bounded machines (such as [34] and [36]) are in fact also nb-PRGs in our sense. (When constructing the PRG, the output of the distinguisher should be considered part of the space that it uses.) This is so since when the distinguisher runs with a pseudorandom input produced by such a PRG, its final state distribution is statistically close to its final state distribution on a random input. This fact was noted, for example, by Nisan and Zuckerman [36], who also point out that their PRG can be used to reduce the amount of randomness in several types of probabilistic algorithms, such as walks on Markov chains. The seed length of such a PRG will be close to the space the distinguisher uses. Therefore, using these PRGs one can only reduce randomness to the output length for efficient sampling algorithms which do not use more space than their output length. In contrast, we consider the general case of efficient sampling algorithms.

Organization. The remainder of the thesis is organized as follows. We start with some preliminaries in Chapter 2. In Chapter 3 we introduce the notion of nb-PRGs and show that under compression hardness assumptions both the cryptographic construction and the Nisan-Wigderson construction give rise to nb-PRGs. In Chapter 4 we prove lower bounds on the size of constant-depth circuits that compress parity exactly and on average. In Chapter 5 we establish the win-win result, namely we show that if OWPs exist, then either the answer to our question is yes or d-CRHFs exist. Chapter 6 presents some applications of nb-PRGs. The application to random sampling is described in Section 6.1, where we also explain the connection with the Promise-P vs. Promise-BPP question and discuss the possibility of matching the entropy bound. The application to secure computation protocols is described in Section 6.2. Finally, Chapter 7 discusses conclusions and open problems.

Chapter 2

Preliminaries

We refer the reader to the Notation and Abbreviations section above. We will also need the following definitions and basic facts.

Definition 2.1 Let X be a random variable on {0, 1}^m. The Shannon entropy of X is defined as

H(X) = Σ_{x ∈ {0,1}^m} Pr[X = x] · log(1 / Pr[X = x]).

Definition 2.2 For random variables X, Y on {0, 1}^m the statistical distance of X and Y is defined as

SD(X, Y) = (1/2) · Σ_{z ∈ {0,1}^m} |Pr[X = z] − Pr[Y = z]|.

Fact 2.1 For any random variables X, Y on {0, 1}^m,

SD(X, Y) = max_{T ⊆ {0,1}^m} (Pr[X ∈ T] − Pr[Y ∈ T]).

Fact 2.2 For any random variables X, Y, Z, if SD(X, Y) ≤ α and SD(Y, Z) ≤ β, then SD(X, Z) ≤ α + β.

Definition 2.3 Let m, n be integers such that m ≤ n. Let h = {h_s}, h_s : {0, 1}^n → {0, 1}^m, be a family of functions. We say that h is a pairwise independent hash function family if for every x_1, x_2 ∈ {0, 1}^n such that x_1 ≠ x_2 and for every y_1, y_2 ∈ {0, 1}^m,

Pr_s[h_s(x_1) = y_1 ∧ h_s(x_2) = y_2] = 1 / 2^{2m}.

We note that polynomial-time computable pairwise independent hash function families can be easily constructed for every n and m ≤ n.
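Definitions 2.1 and 2.2 are easy to make concrete. The following Python sketch (ours, for illustration only) computes both quantities for distributions represented as dictionaries mapping outcomes to probabilities:

```python
from math import log2

def shannon_entropy(dist):
    """H(X) = sum over the support of Pr[X=x] * log2(1/Pr[X=x]) (Definition 2.1)."""
    return sum(p * log2(1 / p) for p in dist.values() if p > 0)

def statistical_distance(dx, dy):
    """SD(X, Y) = (1/2) * sum over z of |Pr[X=z] - Pr[Y=z]| (Definition 2.2)."""
    support = set(dx) | set(dy)
    return 0.5 * sum(abs(dx.get(z, 0.0) - dy.get(z, 0.0)) for z in support)
```

For instance, the uniform distribution on four strings has entropy 2, and Fact 2.1's maximizing set T is exactly {z : Pr[X = z] > Pr[Y = z]}.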

2.1 Boolean Circuits

A boolean circuit consists of NOT, AND and OR gates. Unless stated otherwise, we discuss circuits of unbounded fan-in (i.e., each gate can have any number of inputs). For any boolean circuit we define its size as the number of gates in the circuit, excluding NOT gates, and its depth as the length (in gates, excluding NOT gates) of the longest path from input to output. We call the input level of the circuit the bottom of the circuit. A circuit family C = {C_n}, n ∈ N, is a collection of circuits such that for every n the circuit C_n has n boolean inputs. For functions s : N → N, d : N → N we denote by Size(s(n)) the class of (non-uniform) circuit families C = {C_n} such that for every n the circuit C_n is of size at most s(n), and we similarly define Size(s(n)) ∩ Depth(d(n)).

A literal is a boolean variable or its negation. A term is a conjunction (AND) of literals and a clause is a disjunction (OR) of literals. A DNF is an OR of terms; a CNF is an AND of clauses. An s-DNF is a DNF where all the terms are of length at most s. An s-CNF is a CNF where all the clauses are of length at most s.

2.2 Cryptography and Pseudorandom Generators

The following are basic cryptographic definitions and results; see [12] for more details. In [12] the adversary is usually a PPTM. We use generalized definitions of the standard notions, in which the adversary is in some arbitrary complexity class K.

Definition 2.4 Let K be a complexity class. A length-preserving permutation f : {0, 1}* → {0, 1}* is called a K-one-way permutation (K-OWP for short) if it is computable in polynomial time and for every algorithm A ∈ K, for every polynomial p(·), for all sufficiently large n,

Pr[A(f(x)) = x] < 1/p(n),

where the probability is over x chosen according to U_n and the coin tosses of A.

Definition 2.5 Let K be a complexity class and let f : {0, 1}* → {0, 1}* be a function.
A predicate b : {0, 1}* → {0, 1} is called a K-hard-core bit of f if it is computable in polynomial time and for every algorithm A ∈ K, for every polynomial p(·), for all sufficiently large n,

Pr[A(f(x)) = b(x)] < 1/2 + 1/p(n),

where the probability is over x chosen according to U_n and the coin tosses of A.

Definition 2.6 Let K be a complexity class and let ɛ : N → R be a function such that 0 ≤ ɛ(n) < 1 for all n. Let l : N → N be a function such that l(n) < n for all n. A function G : l(n) → n is called an (l, ɛ, K)-pseudorandom generator ((l, ɛ, K)-PRG for short) if for every distinguisher A ∈ K, for all sufficiently large n,

|Pr[A(G(U_l)) = 1] − Pr[A(U_n) = 1]| < ɛ(n).

G is called an (l, K)-cryptographic pseudorandom generator ((l, K)-crypto-PRG for short) if for every polynomial p(·), G is an (l, 1/p(n), K)-PRG.

The proofs of the following theorems¹ are implicit in the works of Goldreich and Levin [13] and Blum, Micali and Yao [5], [42]. See also [12], chapters 2 and 3.

Theorem 2.1 (an exact version of a theorem from [12]) Let 0 < γ ≤ 1. Let f be a Size(2^{c_1 n^γ})-OWP, for some constant c_1 > 0. Define g(x, r) = (f(x), r) and b(x, r) = ⟨x, r⟩. Then b is a Size(2^{c_2 n^γ})-hard-core bit of g, for some constant c_2 > 0.

Theorem 2.2 (an exact version of a proposition from [12]) Let 0 < γ ≤ δ < 1 be constants. For every constant c_1 > 0, if there exists a Size(2^{c_1 n^{γ/δ}})-OWP, then there exists a poly(n)-time computable (n^δ, Size(2^{c_2 n^γ}))-crypto-PRG, for some constant c_2 > 0.

2.3 The Nisan-Wigderson Pseudorandom Generator

We now present basic definitions and results from the work of Nisan and Wigderson [35].

Definition 2.7 A collection of sets {S_1, ..., S_n}, S_i ⊆ [l], is called an (s, k)-design if |S_i| = k for all i and |S_i ∩ S_j| ≤ s for all i ≠ j. An n × l 0-1 matrix is called an (s, k)-design if the collection of its n rows, interpreted as subsets of [l], is an (s, k)-design.

Lemma 2.1 For all integers n and k such that log n ≤ k ≤ n, there exists an n × l boolean matrix which is a (log n, k)-design, where l = O(k^2). The matrix can be computed in time polynomial in n. If k = O(log n), there exists such a matrix with l = O(log n) that is computable in time polynomial in n.

Definition 2.8 Let l : N → N be a 1-1 function. Let A = {A_n} be a collection of 0-1 matrices such that A_n is an n × l(n) matrix. For x ∈ {0, 1}^{l(n)} the matrix A_n defines n subsets of the bits of x. Let f be a boolean function. We denote by f^A the transformation such that for every n and every x of length l(n), the output of f^A(x) is the concatenation of n applications of the function f to the subsets of the bits of x defined by A_n.
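The mechanics behind Theorems 2.1 and 2.2 are the Goldreich-Levin inner-product predicate and the classical iterate-and-extract generator of Blum, Micali and Yao. A toy Python sketch of both (ours; the stand-in permutation used in the example is of course not one-way, so the output is not actually pseudorandom):

```python
def inner_product_bit(x_bits, r_bits):
    """The Goldreich-Levin predicate b(x, r) = <x, r> mod 2 (as in Theorem 2.1)."""
    return sum(xi & ri for xi, ri in zip(x_bits, r_bits)) % 2

def blum_micali_bits(f, b, x0, k):
    """Output k bits b(x0), b(f(x0)), b(f(f(x0))), ... by iterating a
    permutation f. Pseudorandomness rests entirely on f being one-way;
    any toy f plugged in here is not."""
    out, x = [], x0
    for _ in range(k):
        out.append(b(x))
        x = f(x)
    return out
```

As a usage example, f(x) = (3x + 1) mod 8 is a permutation of {0, ..., 7} (3 is invertible mod 8), and with b taken as the parity of x's binary representation, blum_micali_bits(f, b, x0, k) produces k bits from a 3-bit "seed".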
The following theorem from [35] shows how to convert a hard boolean function into a pseudorandom generator.

Theorem 2.3 Let k(n) < l(n) < n and let f be a boolean function. Suppose that for all sufficiently large n, the function f on inputs of length k(n) cannot be computed by circuits of size n^2 on more than a 1/2 + 1/n^2 fraction of the inputs. For every n let A_n be a boolean n × l(n) matrix which is a (log n, k(n))-design, and let A = {A_n}. Then G given by G = f^A is an (l, 1/n, Size(n))-PRG.

¹ The theorems also hold in the uniform setting; however, we need the non-uniform versions for our purposes.
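A standard way to realize the designs of Lemma 2.1 is via polynomials over a finite field: for a prime q, identify [q^2] with pairs (a, v) and take one set per polynomial of degree < d; two distinct such polynomials agree on at most d − 1 points, so pairwise intersections are small. The sketch below (our illustration, with l = q^2 and k = q) builds such a design and applies the f^A transformation of Definition 2.8:

```python
from itertools import product

def poly_design(q, d):
    """One set per degree-<d polynomial over GF(q), q prime:
    S_p = {a*q + p(a) : a in GF(q)}. Gives q**d sets of size q inside
    [q*q] with pairwise intersections at most d-1."""
    sets = []
    for coeffs in product(range(q), repeat=d):
        sets.append({a * q + sum(c * a**i for i, c in enumerate(coeffs)) % q
                     for a in range(q)})
    return sets

def nw_generator(f, sets, x_bits):
    """The f^A transformation: the i-th output bit is f applied to the
    restriction of x to the i-th set of the design."""
    return [f([x_bits[j] for j in sorted(s)]) for s in sets]
```

For q = 5, d = 2 this yields 25 sets of size 5 in a universe of 25 points with pairwise intersections at most 1, i.e., seed length l = k^2 as in Lemma 2.1. (The generator is only pseudorandom when f is genuinely hard, as Theorem 2.3 requires; plugging in parity here is purely structural illustration.)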

2.4 Function Compression

The general notion of function compression was very recently introduced by Harnik and Naor [18], who mostly consider it in the context of NP relations. We use the following variant of this notion.

Definition 2.9 Let m : N → N be a function such that m(n) < n for all n. Let C, S be complexity classes. We say that a boolean function f : {0, 1}* → {0, 1} has compression complexity (m, C, S) if there exist two algorithms, a compressor C : n → m(n) and a solver S : {0, 1}* → {0, 1}, such that C ∈ C, S ∈ S and S(C(x)) = f(x) for all x.

We will generally be interested in the case where the solver is unbounded. In this case we say that f can be m-compressed by algorithms in C. We will sometimes be interested in average-case function compression, that is, compression which is correct only for some fraction of the inputs.

Definition 2.10 We say that a compressing algorithm C compresses f on average with probability p if there exists a solver S such that for every n we have Pr[S(C(x)) = f(x)] ≥ p, where the probability is over the uniform choice of x and the coin tosses of C (if C is probabilistic). We define the advantage of C as p − 1/2.

We note that in the case of average-case compression by circuits, we can assume w.l.o.g. that the compressing circuit is deterministic. We use the following notation.

Notation 2.1 Let C : {0, 1}^n → {0, 1}^m be a compressing circuit for a boolean function f : {0, 1}^n → {0, 1} and let S : {0, 1}^m → {0, 1} be a solver. We define the following:

Υ^f_S(C) = the number of inputs x ∈ {0, 1}^n for which S(C(x)) = f(x),
Υ^f(C) = max_S {Υ^f_S(C)},
α^f(C) = Υ^f(C) / 2^n,
and the advantage of C, defined as α^f(C) − 1/2.

For y ∈ {0, 1}^{m(n)} we define:

zero^f_C(y) = {x ∈ {0, 1}^n : f(x) = 0 and C(x) = y},
one^f_C(y) = {x ∈ {0, 1}^n : f(x) = 1 and C(x) = y}.

In order to get some intuition about function compression we now present a couple of basic observations.

Observation 2.1 Most boolean functions are not compressible to n − 1 bits by circuits of size 2^{o(n)}.
Proof: By a counting argument.

Note that if f can be compressed to m bits by circuits of size s, then it can be computed by circuits of size s + 2^m. Thus we get the following.

Observation 2.2 If a boolean function is not computable by circuits of size s + 2^m, then it cannot be compressed to m bits by circuits of size s.
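To make Definition 2.9 concrete, here is a toy compressor/solver pair (our illustration, not from the thesis): the majority function on n bits can be compressed to its popcount, which carries only ceil(log2(n+1)) bits of information, and an unbounded solver recovers the answer by thresholding.

```python
def compress_majority(x_bits):
    """Compressor C: send only the popcount of x -- ceil(log2(n+1)) bits
    of information instead of n."""
    return sum(x_bits)

def solve_majority(count, n):
    """(Unbounded) solver S: recover MAJ(x) from the compressed message."""
    return 1 if 2 * count > n else 0
```

Here S(C(x)) = MAJ(x) for every x, so majority is m-compressible with m = O(log n) even by very weak compressors, whereas by Observation 2.1 a random function admits no such savings, and Chapter 4 shows parity admits none by constant-depth circuits.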

Chapter 3

Pseudorandom Generators Fooling Non-Boolean Distinguishers

We are now ready to present our notion of pseudorandom generators fooling non-boolean distinguishers.

Definition 3.1 Let K be a complexity class, and let 0 ≤ ɛ(n) < 1 and m(n) ≤ l(n) < n for all n. A function G : l(n) → n is called a pseudorandom generator fooling non-boolean distinguishers with parameters (l(n), m(n), ɛ(n), K) ((l, m, ɛ, K)-nb-PRG for short) if for every D ∈ K such that D : n → m(n), for all sufficiently large n we have

SD(D(G(U_l)), D(U_n)) < ɛ(n).

(If D is probabilistic, the probability space includes also its coin tosses.) G is called an (l, m)-crypto-nb-PRG if for every polynomial p(·) the function G is an (l, m, 1/p(n), K)-nb-PRG, where K is the class of PPTMs.

Note that for m = 1 we get the classical PRG definition. Also note that m(n) ≤ l(n) is needed, since otherwise the distinguisher could output the first m bits of its input and achieve an advantage of at least 1/2.

The following parameters can be obtained non-constructively.

Proposition 3.1 For every n, m, s and ɛ there exists a function G : {0, 1}^l → {0, 1}^n which is an (l, m, ɛ, Size(s))-nb-PRG, for l = O(log s + m + log(1/ɛ)).

As was already mentioned, in our applications we would typically like to have nb-PRGs with polynomial stretch that can be computed in polynomial time. For a summary of the various parameter settings that we obtain, see Table 3.1.

We note that, somewhat counterintuitively, nb-PRGs fooling distinguishers with m = n^{0.1} can be turned into nb-PRGs fooling distinguishers with m = n^{0.9}. (This might seem counterintuitive, since a larger m means an easier task for the distinguisher.) Indeed, to get an nb-PRG G′ against distinguishers with m = n′^{0.9}, take the nb-PRG G against distinguishers with m = n^{0.1} and output only the first n′ = n^{1/9} bits of its output. (This nb-PRG has output length n′ and fools distinguishers that output n^{0.1} = n′^{0.9} bits.) However, it is important to pay attention to the running time and the seed length of the resulting nb-PRG. If, for example, the seed length of G is l = m^2 = n^{0.2}, we get that the seed length of G′ is greater than its output length, so it is not a valid nb-PRG. For the same reason it is not possible to build an efficient nb-PRG G′ against distinguishers with m = n^{0.1} from an nb-PRG G against distinguishers with m = log n, where G takes poly(n) time to compute. This is so since G′ would not be computable in polynomial time in its output length.

Similarly to Observation 2.2, a sufficiently strong classical PRG is also a good nb-PRG.

Observation 3.1 If G is an (l, ɛ, Size(s + 2^m))-PRG, then G is an (l, m, ɛ, Size(s))-nb-PRG.

Using this observation we conclude that strong cryptographic assumptions (e.g. exponential OWPs, as in Theorem 2.2) imply the existence of efficient nb-PRGs for the polynomial parameter setting.¹ For NW-style PRGs this observation only gives us nb-PRGs with m = O(log n).

We now construct nb-PRGs based on weaker assumptions, namely compression hardness assumptions. First, we note that both the cryptographic and the Nisan-Wigderson constructions have black-box proofs, i.e., the proof exhibits an oracle circuit that contradicts the hardness assumption given that the oracle breaks the PRG. We also observe that in these proofs the oracle is used in a simple way; for example, in both proofs the circuit calls the oracle only once. We show that proofs of this form can be translated to the setting of function compression. Specifically, we show that if an oracle circuit A^B computes some function f, then, given an oracle B′ that compresses the function computed by B, it is possible to build an oracle circuit A′^{B′} that compresses f. The circuit A′^{B′} works roughly as follows. It performs the computation of A^B until it encounters an oracle gate. Then, instead of a B-gate, it has a B′-gate. At this point it outputs the output of the B′-gate concatenated with all the additional information that is needed in order to continue the computation. The solver will later continue the computation from this stage. The connection with nb-PRGs follows by observing that a distinguisher for an nb-PRG actually compresses some function that breaks the PRG in the standard sense.

Construction 3.1 Let A^B be an oracle circuit. Given an oracle gate B′ (which can have many output bits) we construct the circuit A′^{B′} as follows. We start with the circuit A and go over all its oracle gates. For each oracle gate we delete all the paths from it to the output. (By deleting a path we mean deleting all the gates and edges on the path. This leaves some edges hanging, not entering any gate: these are outputs of the new circuit.) We replace the oracle gate by a B′-gate. (The outputs of B′ are also outputs of the new circuit.) We now delete all the constant outputs of A′. (These are edges of A with constant values that turned into outputs of A′.)

¹ A similar result can be obtained using exponential-strength one-way functions. However, in this case we have a polynomial blowup in the seed length. For instance, the construction of [16] gives l = O(m^8). This polynomial overhead can be reduced using the constructions of [17, 19].
At this point it outputs the output of the B'-gate concatenated with all the additional information that is needed in order to continue the computation. The solver will later continue the computation from this stage. The connection with nb-PRGs follows by observing that a distinguisher for an nb-PRG actually compresses some function that breaks the PRG in the standard sense.

Construction 3.1 Let A^B be an oracle circuit and let B' be an oracle (which may have many output bits). We construct the circuit A'^{B'} as follows. We start with the circuit A and go over all its oracle gates. For each oracle gate we delete all the paths from it to the output. (By deleting a path we mean deleting all the gates and edges on the path. This leaves some edges hanging, not entering any gate: these are outputs of the new circuit.) We replace the oracle gate by a B'-gate. (The outputs of B' are also outputs of the new circuit.) Finally, we delete all the constant outputs of A'. (These are edges of A with constant values that turned into outputs of A'.)

¹ A similar result can be obtained using exponential-strength one-way functions. However, in this case we have a polynomial blowup in the seed length. For instance, the construction of [16] gives l = O(m^8). This polynomial overhead can be reduced using the constructions of [17, 19].
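The cut-and-resume idea behind Construction 3.1 can be sketched in code. The following toy is our own illustration (one oracle call, with hypothetical choices of B, B' and the surrounding circuit): A'^{B'} outputs the B'-gate's answer together with the side information needed to resume the computation, and the new solver T finishes the run, as in Lemma 3.1.

```python
from itertools import product

k = 6                                 # oracle input length (toy)

def B(z):                             # original boolean oracle: majority on k bits
    return int(sum(z) > k // 2)

def B_comp(z):                        # B': a compressor for B, i.e. S_B(B'(z)) = B(z)
    return bin(sum(z))[2:].zfill(3)   # 3 bits of Hamming weight suffice

def S_B(c):                           # solver for B' (may be inefficient)
    return int(int(c, 2) > k // 2)

def A_oracle(x, oracle):              # A^B: one oracle call, then one extra gate
    return oracle(x) ^ x[0]           # f(x) = B(x) XOR x_1

f = lambda x: A_oracle(x, B)

def A_comp(x):                        # A'^{B'}: cut A at the oracle gate and output the
    return (B_comp(x), x[0])          # B'-gate's answer plus the info needed to resume

def T(msg):                           # new solver: resumes A's computation after the gate
    c, bit = msg
    return S_B(c) ^ bit

assert all(T(A_comp(x)) == f(x) for x in product((0, 1), repeat=k))
print("A'^{B'} compresses f to 3 + 1 output bits")
```

Note how each surviving oracle gate contributes output bits to A', which is why, as remarked below, the paradigm breaks down when there are many oracle calls.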

| Compression length (m) | Distinguisher class | Seed length (l) | Assumption | Comments | Reference |
|---|---|---|---|---|---|
| 1 | Size(n) | O(log n) | standard NW: function in E with circuit size 2^{Ω(n)} | | [23, 40] |
| log n | Size(n) | O(log n) | standard NW: function in E with circuit size 2^{Ω(n)} | | [23, 40] + Obs. 3.1 |
| (log n)^c | Size(n) | O((log n)^{2c}) | compress NW: function in DTIME(2^{O(n^{1/c})}) that is not n/2-compressible by Size(O(2^{2n^{1/c}})) on a 2^{−2n^{1/c}} fraction of the inputs | | Thm. 3.2 |
| n^γ | Size(2^{n^γ}) | n^δ | strong crypto: Size(2^{Ω(n^{γ/δ})})-OWP | 0 < γ ≤ δ < 1 | Thm. 2.2 + Obs. 3.1 |
| n^γ | PPTM | O(n^γ) | compress crypto: OWP with n/2-incompressible hard-core bit | 0 < γ < 1 | Thm. 3.1 |
| n^γ | Size(n) | O(n^{2γ}) | compress NW: function in P that is not n/2-compressible by Size(O(n^{2/γ})) on a 1/n^{2/γ} fraction of the inputs | γ < 1/2 | Thm. 3.2 |
| n^γ | Size(n) ∩ Depth(d) | O(n^{2γ+δ}) | none | 0 < γ < 1/2, δ > 0 | Thm. 3.4 |

Table 3.1: Summary of nb-PRG parameter settings. All the nb-PRGs have n output bits and are computable in poly(n) time. All the distinguishers have advantage at most ε, where 0 < ε < 1 can be an arbitrarily small constant.
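To make the distinguishing measure SD(D(G(U_l)), D(U_n)) from Definition 3.1 concrete, the following brute-force sketch (a toy of our own, in which G is deliberately a bad generator) computes it exactly for a 2-bit-output distinguisher.

```python
from itertools import product
from collections import Counter

l, n, m = 3, 6, 2                     # toy seed/output/distinguisher-output lengths

def G(seed):                          # toy "generator": repeat the seed (not pseudorandom)
    return seed + seed

def D(y):                             # non-boolean distinguisher outputting m = 2 bits
    return (y[0] ^ y[3], y[1] ^ y[4]) # these XORs expose the repetition in G's output

def dist(samples):                    # exact distribution of a list of m-bit outcomes
    c, t = Counter(samples), len(samples)
    return {v: c[v] / t for v in product((0, 1), repeat=m)}

p = dist([D(G(s)) for s in product((0, 1), repeat=l)])   # D(G(U_l))
q = dist([D(y) for y in product((0, 1), repeat=n)])      # D(U_n)
sd = 0.5 * sum(abs(p[v] - q[v]) for v in product((0, 1), repeat=m))
print("SD(D(G(U_l)), D(U_n)) =", sd)
```

Here SD comes out to 0.75, certifying that this G is far from an nb-PRG even against distinguishers with m = 2.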

Note that if A calls B adaptively, i.e., there is a path from one oracle gate to the output that goes through another oracle gate, then the second oracle gate will be deleted when we process the first one. This does not damage the compression, since the solver will continue the original computation from the first oracle gate. Observe also that if {A^B} is a P-uniform circuit family, then so is {A'^{B'}}.

Lemma 3.1 Let A^B be a probabilistic oracle circuit that computes some function f with probability p over a uniformly chosen input and random coins. Let B' be an oracle that compresses the function that B computes, and let A'^{B'} be the circuit constructed according to Construction 3.1. Let n denote the input length of A^B (and of A'^{B'}) and denote by m(n) the output length of A'^{B'}. If m(n) < n, then A'^{B'} compresses f with probability p.

Proof: Since B' compresses the function that B computes, there exists a solver S such that S(B'(x)) = B(x) for all x. It is possible to construct a solver T such that T(A'^{B'}(·)) computes f with probability p: the solver T completes the circuit A'^{B'} into a circuit T(A'^{B'}(·)) that is equivalent to A^B, namely the circuit A with each B-gate replaced by S(B'(·)).

Note that each oracle gate of the original circuit (that is not deleted) contributes at least one output bit to the new circuit. Therefore this approach does not work when there are too many oracle calls. This is the case, for instance, in proofs of Yao's XOR Lemma [14, 20, 23], which do not efficiently translate to the compression setting via the above paradigm. Another problematic case is when the circuit performs complex computations that involve both the oracle answers and the input.

3.1 Cryptographic nb-PRGs

We now weaken the assumptions and show that cryptographic nb-PRGs exist under polynomial-strength cryptographic incompressibility assumptions.
Specifically, we prove that cryptographic nb-PRGs exist provided there are OWPs with incompressible hard-core bits.

Definition 3.2 Let K be a complexity class, let b be a polynomial-time computable predicate, let f be a length-preserving permutation, and let m(n) < n for all n. Define g(y) = b(f^{-1}(y)). We say that b is an (m, K)-incompressible hard-core bit of f if for every D ∈ K with D: {0,1}^n → {0,1}^{m(n)} and every polynomial p(n), for all sufficiently large n, D compresses g on average with probability less than 1/2 + 1/p(n).

Observe that for m = 1 we get the standard hard-core bit definition. In the sequel, when K is omitted we take it to be probabilistic polynomial time. (Similar results can be obtained for the non-uniform setting, i.e., where K is the class of polynomial-size circuits.) We note that even optimal compression in the sense of Harnik and Naor [18] does not rule out the existence of incompressible hard-core bits. Harnik and Naor mainly consider the compression of NP languages up to the witness length. The NP language {y : b(f^{-1}(y)) = 1} has natural witnesses of length n, whereas we require incompressibility to length m < n. So even if optimal compression in the sense of [18] is possible, there may still exist incompressible hard-core bits in the sense required here.

We now show that the BMY construction produces an nb-PRG if, instead of a standard hard-core bit, we use an incompressible hard-core bit. The proof of correctness of the BMY construction is a black-box proof: given a distinguisher B for the generator as an oracle, we build an algorithm A that breaks the hard-core bit. Since the algorithm A is relatively simple (it calls the oracle only once and then performs a simple computation), we can use Lemma 3.1 to translate the proof to our setting.

Theorem 3.1 Let m: N → N be a function such that m(n) < n for all n. If there exists a OWP with an m-incompressible hard-core bit, then for any function l: N → N such that l(n) < n and n = l(n)^{O(1)} there exists an (l, m(l(n)) − 1)-crypto-nb-PRG G: {0,1}^{l(n)} → {0,1}^n that is computable in poly(n) time.

Proof: We use the standard BMY construction [5, 42]. Namely, let f: {0,1}* → {0,1}* be a one-way permutation with an m-incompressible hard-core bit b: {0,1}* → {0,1}. We define the nb-PRG G as G(x) = b(x) · b(f(x)) ··· b(f^{n−1}(x)). Our goal is to show that G is an (l, m(l) − 1)-crypto-nb-PRG. Suppose, for the sake of contradiction, that there exists a PPTM D: {0,1}^n → {0,1}^{m(l)−1} such that SD(D(G(U_l)), D(U_n)) ≥ 1/poly(n). This means that there exists some T ⊆ {0,1}^{m(l)−1} such that Pr[D(G(U_l)) ∈ T] − Pr[D(U_n) ∈ T] ≥ 1/poly(n). Define the function S by S(y) = 1 iff y ∈ T, and define the function B by B(x) = S(D(x)). It holds that D compresses B. We also know that B is a distinguisher for G in the standard sense. The standard proof shows that there exists a PPTM A that, given B as an oracle, computes b(f^{-1}(·)) with probability 1/2 + 1/poly(n). The machine A works as follows. On input y it chooses a random bit a, a random j with 0 ≤ j ≤ n, and random bits r_1, ..., r_{j−1}. It then computes out = B(r_1 ··· r_{j−1} a b(y) b(f(y)) ··· b(f^{n−j−1}(y))).
If out = 1 it outputs a, and it outputs ā otherwise. Viewing A as an oracle circuit and using Lemma 3.1, we conclude that the following machine C compresses b(f^{-1}(·)) with probability 1/2 + 1/poly(n). On input y the machine C chooses a random bit a, a random j with 0 ≤ j ≤ n, and random bits r_1, ..., r_{j−1}. It then outputs the bit a concatenated with D(r_1 ··· r_{j−1} a b(y) b(f(y)) ··· b(f^{n−j−1}(y))).

Using the above theorem we get, for example, that if there exists a OWP with an n/2-incompressible hard-core bit, then there are nb-PRGs for the polynomial parameter setting with l = O(m). We note that n = poly(l(n)) even for small values of m (for example, the theorem cannot give seed length close to m when m = log n). Thus the incompressible hard-core bit assumption is not strong enough to derandomize BPP.
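The shape of the generator in the proof above can be sketched as follows. This is a toy of our own: the permutation is affine (trivially invertible, so not one-way) and the predicate is ordinary parity; the sketch only illustrates the iteration G(x) = b(x) b(f(x)) ··· b(f^{n−1}(x)), not a secure instantiation.

```python
l, n = 8, 16                         # seed length and output length (toy sizes)

def perm(x):                         # toy permutation on {0, ..., 2^l - 1}; affine, so
    return (5 * x + 3) % (1 << l)    # trivially invertible and NOT one-way

def b(x):                            # toy predicate standing in for the hard-core bit
    return bin(x).count("1") % 2

def G(seed):                         # BMY-style stretch: b(x), b(f(x)), ..., b(f^{n-1}(x))
    out, x = [], seed
    for _ in range(n):
        out.append(b(x))
        x = perm(x)
    return out

print(G(0b10110101))                 # 16 output bits from an 8-bit seed
```

With a genuine OWP and an n/2-incompressible hard-core bit in place of these toys, this is exactly the construction shown above to be a crypto-nb-PRG.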

3.2 Nisan-Wigderson Style nb-PRGs

As was previously noted, standard NW-style assumptions yield nb-PRGs whose computation requires time exponential in the input length; they are thus unsuitable for the polynomial parameter setting. Using compression complexity assumptions instead, we can get nb-PRGs for the full parameter range. This is shown by the following theorem, which is a generalization of a theorem of Nisan and Wigderson [35].

Theorem 3.2 Let k = k(n), l = l(n) and m = m(k) be such that m < k < l < n. Let f be a boolean function such that, for all sufficiently large n, f on inputs of length k cannot be compressed on average to m bits with probability 1/2 + 1/n^2 by circuits of size O(n^2). For every n let A_n be a boolean n × l(n) matrix which is a (log n, k(n))-design, and let A = {A_n}. Then G given by G = f^A is an (l, m(k), 1/n, Size(n))-nb-PRG.

Proof: We again observe that the standard proof is a black-box proof that uses the oracle in a simple way. Suppose, for the sake of contradiction, that there exists a distinguisher D ∈ Size(n), D: {0,1}^n → {0,1}^{m(k)}, such that SD(D(G(U_l)), D(U_n)) ≥ 1/n. Let T ⊆ {0,1}^m be a set that realizes the statistical distance, meaning Pr[D(G(U_l)) ∈ T] − Pr[D(U_n) ∈ T] ≥ 1/n. As before, define the function S by S(y) = 1 iff y ∈ T and define the function B by B(x) = S(D(x)). It holds that D compresses B, and B is a distinguisher for G in the standard sense. The proof of [35] shows how to build a circuit A of size O(n^2) that, given oracle access to B, computes f on inputs of length k with advantage 1/n^2. On input x the circuit A works as follows. It first computes the functions y_1, ..., y_{i−1} (for some 0 ≤ i ≤ n), where every y_j depends on at most log n bits of the input (the y_j's are computed by CNFs). It then computes out = B(y_1, ..., y_{i−1}, c_i, ..., c_n), where the c_j's are constants. If out = 1 it outputs c_i, and it outputs c̄_i otherwise. Using Lemma 3.1 we conclude that the following circuit C compresses f with advantage 1/n^2. It computes y_1, ..., y_{i−1} similarly to A. It then runs D(y_1, ..., y_{i−1}, c_i, ..., c_n) and outputs its answer. Note that the constant c_i does not have to be part of the output of C, because it can be wired into the solver circuit.

The following are typical parameters. Suppose that there exists some 0 < γ < 1/2 and a function in P which is not compressible to n/2 bits on a 1/n^{2/γ} fraction of the inputs by circuits of size O(n^{2/γ}). Then there is an (O(n^{2γ}), n^γ, 1/n, Size(n))-nb-PRG that is computable in poly(n) time. We note that the advantage of the distinguisher can be made negligible by using a function that is hard to compress with even negligible advantage.

nb-PRGs for Constant Depth Circuits

Similarly to the classical Nisan-Wigderson PRGs, Theorem 3.2 can be used to build nb-PRGs that fool constant-depth circuits, given a function that is hard to compress by constant-depth circuits. This is because the proof of the theorem increases the depth of the circuit

only by a small constant. In order to build such nb-PRGs we obtain unconditional lower bounds on the size of constant-depth circuits that compress the parity function. Håstad [15] showed that circuits of depth d and size 2^{n^{1/d}} can only compute parity with advantage 2^{−Ω(n^{1/d})}. Using this result we conclude that circuits of depth d and size 2^{o(n^{1/(d+2)})} that output m = o(n^{1/(d+2)}) bits can compress parity with advantage at most 2^{−Ω(n^{1/(d+2)})}. If we build nb-PRGs using this result we get a seed length of roughly m^d. We would like to get a shorter seed that does not depend on d. Thus we prove a stronger lower bound, namely that small circuits of constant depth cannot compress parity on average even slightly, i.e., even to m = n^δ bits for any 0 < δ < 1. The following theorem summarizes the results that we obtain.

Theorem 3.3 Let 0 < δ < 1 be a constant and let C: {0,1}^n → {0,1}^m be a circuit of depth d, where m = n^δ. Then the following holds. (1) For d = 1, the advantage of C in compressing parity is at most 2^{−Ω(n − n^δ log n)}. (2) For d ≥ 2 and C of size 2^{O(n^{(1−δ)/d})}, the advantage of C in compressing parity is at most 2^{−Ω(n^{(1−δ)/d})}.

We prove this theorem in Chapter 4. The following theorem shows the existence of unconditional nb-PRGs fooling constant-depth circuits.

Theorem 3.4 Let d ∈ N and 0 < δ < 1 be constants, and let k: N → N be a function such that log n ≤ k < n and n^2 = o(2^{k(n)^{(1−δ)/(d+2)}}). There exists a function G which is an (l, m(k), 1/n, Size(n) ∩ Depth(d))-nb-PRG, for l = O(k^2) and m(k) = k^δ. G is computable in time polynomial in n.

Proof: Let A = {A_n} be a collection of boolean n × l(n) matrices which are (log n, k)-designs (such matrices exist by Lemma 2.1). We define G(x) = parity^A(x). Clearly, G is computable in space log n. The proof of Theorem 3.2 works also for constant-depth circuits, since, similarly to the original proof of [35], it increases the depth of the circuit from d to d + 2.
We therefore get an nb-PRG against small constant-depth circuits that output m bits, with seed length O(m^{2+δ}) for any constant 0 < δ < 1.
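The NW-style generator G = f^A of Theorem 3.2, instantiated with f = parity as in Theorem 3.4, can be sketched as follows. The design below is a hard-wired toy of our own (rows and columns of a 3×3 grid, with pairwise intersections of size at most 1), not a construction from Lemma 2.1.

```python
# Toy design: 6 subsets of the 9 seed positions (rows and columns of a 3x3 grid),
# with pairwise intersections of size at most 1.  Hard-wired for illustration only.
l, k = 9, 3
S = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8)]

def f(bits):                          # the hard function; parity, as in Theorem 3.4
    return sum(bits) % 2

def G(x):                             # NW-style generator: G(x)_i = f(x restricted to S_i)
    assert len(x) == l
    return tuple(f([x[j] for j in Si]) for Si in S)

print(G((1, 0, 1, 1, 1, 0, 0, 0, 1)))  # 6 output bits from a 9-bit seed
```

Because any two sets share at most one position, fixing one output bit reveals little about the others; the real parameters (log n, k)-designs of Lemma 2.1 make this quantitative.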


Chapter 4

Compression Lower Bounds for Parity

We extend the parity lower bounds of Håstad [15] to function compression. We assume that the solver is unbounded and prove lower bounds on compressing circuits for exact and average-case compression of parity. We note that the following upper bounds can be obtained. For d = 1, a circuit that outputs the first n^δ − 1 bits of its input, concatenated with an AND of the remaining bits, has advantage 2^{−Ω(n − n^δ)}. For d ≥ 2 it is known that parity on n bits can be computed by circuits of depth d and size 2^{O(n^{1/(d−1)})} [15]. Thus, by dividing the input into n^δ sets of size n^{1−δ} each and computing the parity of each set, we get a circuit of size 2^{O(n^{(1−δ)/(d−1)})} that compresses parity exactly to n^δ bits.

Notation 4.1 In this chapter we omit the superscript parity and write Υ_S(C), Υ(C), α(C), ∆(C), zero_C(y) and one_C(y).

4.1 Exact Compression of Parity

In this section we prove the following theorem.

Theorem 4.1 A circuit C: {0,1}^n → {0,1}^{n^δ} of depth d that compresses parity exactly must have size 2^{Ω(n^{(1−δ)/d})}.

We later prove a stronger statement, namely that circuits of size 2^{O(n^{(1−δ)/d})} cannot compress parity even on average. However, for exact compression a simpler lower-bound technique can be used, so we present it here. We use the connection between the sensitivity of boolean functions and their constant-depth circuit size introduced by Linial et al. [33].

Definition 4.1 Let f: {0,1}^n → {0,1}^m be a function and x ∈ {0,1}^n. The sensitivity of f on x, s_x(f), is the number of Hamming neighbors x' of x such that f(x) ≠ f(x'). The average sensitivity of f, s(f), is defined as s(f) = (1/2^n) Σ_x s_x(f).
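The d ≥ 2 upper bound above (split the input into blocks and output the parity of each block) can be checked mechanically; the block counts and sizes here are toy values of our own.

```python
from itertools import product

n, m = 12, 4                          # compress parity on n bits to m bits (toy sizes)

def compress(x):                      # parity of each of the m blocks of n/m bits
    w = n // m
    return tuple(sum(x[i * w:(i + 1) * w]) % 2 for i in range(m))

def solve(c):                         # unbounded solver: XOR of the m block parities
    return sum(c) % 2

parity = lambda x: sum(x) % 2
assert all(solve(compress(x)) == parity(x) for x in product((0, 1), repeat=n))
print("parity on", n, "bits compressed exactly to", m, "bits")
```

Each block parity is computed by a small constant-depth circuit on n^{1−δ} bits, which is what yields the 2^{O(n^{(1−δ)/(d−1)})} size bound quoted above.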


More information

Kolmogorov Complexity in Randomness Extraction

Kolmogorov Complexity in Randomness Extraction LIPIcs Leibniz International Proceedings in Informatics Kolmogorov Complexity in Randomness Extraction John M. Hitchcock, A. Pavan 2, N. V. Vinodchandran 3 Department of Computer Science, University of

More information

From Non-Adaptive to Adaptive Pseudorandom Functions

From Non-Adaptive to Adaptive Pseudorandom Functions From Non-Adaptive to Adaptive Pseudorandom Functions Itay Berman Iftach Haitner January, 202 Abstract Unlike the standard notion of pseudorandom functions (PRF), a non-adaptive PRF is only required to

More information

Stanford University CS254: Computational Complexity Handout 8 Luca Trevisan 4/21/2010

Stanford University CS254: Computational Complexity Handout 8 Luca Trevisan 4/21/2010 Stanford University CS254: Computational Complexity Handout 8 Luca Trevisan 4/2/200 Counting Problems Today we describe counting problems and the class #P that they define, and we show that every counting

More information

Limits on the Stretch of Non-Adaptive Constructions of Pseudo-Random Generators

Limits on the Stretch of Non-Adaptive Constructions of Pseudo-Random Generators Limits on the Stretch of Non-Adaptive Constructions of Pseudo-Random Generators Josh Bronson 1, Ali Juma 2, and Periklis A. Papakonstantinou 3 1 HP TippingPoint josh.t.bronson@hp.com 2 University of Toronto

More information

Lecture 23: Alternation vs. Counting

Lecture 23: Alternation vs. Counting CS 710: Complexity Theory 4/13/010 Lecture 3: Alternation vs. Counting Instructor: Dieter van Melkebeek Scribe: Jeff Kinne & Mushfeq Khan We introduced counting complexity classes in the previous lecture

More information

On the Power of the Randomized Iterate

On the Power of the Randomized Iterate On the Power of the Randomized Iterate Iftach Haitner 1, Danny Harnik 2, Omer Reingold 3 1 Dept. of Computer Science and Applied Math., Weizmann Institute of Science, Rehovot, Israel. iftach.haitner@weizmann.ac.il.

More information

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem.

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem. 1 More on NP In this set of lecture notes, we examine the class NP in more detail. We give a characterization of NP which justifies the guess and verify paradigm, and study the complexity of solving search

More information

Pseudorandom Generators

Pseudorandom Generators CS276: Cryptography September 8, 2015 Pseudorandom Generators Instructor: Alessandro Chiesa Scribe: Tobias Boelter Context and Summary In the last lecture we have had a loo at the universal one-way function,

More information

Lectures One Way Permutations, Goldreich Levin Theorem, Commitments

Lectures One Way Permutations, Goldreich Levin Theorem, Commitments Lectures 11 12 - One Way Permutations, Goldreich Levin Theorem, Commitments Boaz Barak March 10, 2010 From time immemorial, humanity has gotten frequent, often cruel, reminders that many things are easier

More information

Length-Increasing Reductions for PSPACE-Completeness

Length-Increasing Reductions for PSPACE-Completeness Length-Increasing Reductions for PSPACE-Completeness John M. Hitchcock 1 and A. Pavan 2 1 Department of Computer Science, University of Wyoming. jhitchco@cs.uwyo.edu 2 Department of Computer Science, Iowa

More information

1 Agenda. 2 History. 3 Probabilistically Checkable Proofs (PCPs). Lecture Notes Definitions. PCPs. Approximation Algorithms.

1 Agenda. 2 History. 3 Probabilistically Checkable Proofs (PCPs). Lecture Notes Definitions. PCPs. Approximation Algorithms. CS 221: Computational Complexity Prof. Salil Vadhan Lecture Notes 20 April 12, 2010 Scribe: Jonathan Pines 1 Agenda. PCPs. Approximation Algorithms. PCPs = Inapproximability. 2 History. First, some history

More information

Making Hard Problems Harder

Making Hard Problems Harder Electronic Colloquium on Computational Complexity, Report No. 3 (2006) Making Hard Problems Harder Joshua Buresh-Oppenheim Simon Fraser University jburesho@cs.sfu.ca Rahul Santhanam Simon Fraser University

More information

Randomness and non-uniformity

Randomness and non-uniformity Randomness and non-uniformity Felix Weninger April 2006 Abstract In the first part, we introduce randomized algorithms as a new notion of efficient algorithms for decision problems. We classify randomized

More information

UC Berkeley CS 170: Efficient Algorithms and Intractable Problems Handout 22 Lecturer: David Wagner April 24, Notes 22 for CS 170

UC Berkeley CS 170: Efficient Algorithms and Intractable Problems Handout 22 Lecturer: David Wagner April 24, Notes 22 for CS 170 UC Berkeley CS 170: Efficient Algorithms and Intractable Problems Handout 22 Lecturer: David Wagner April 24, 2003 Notes 22 for CS 170 1 NP-completeness of Circuit-SAT We will prove that the circuit satisfiability

More information

Complete problems for classes in PH, The Polynomial-Time Hierarchy (PH) oracle is like a subroutine, or function in

Complete problems for classes in PH, The Polynomial-Time Hierarchy (PH) oracle is like a subroutine, or function in Oracle Turing Machines Nondeterministic OTM defined in the same way (transition relation, rather than function) oracle is like a subroutine, or function in your favorite PL but each call counts as single

More information

Pseudorandomness and combinatorial constructions

Pseudorandomness and combinatorial constructions Pseudorandomness and combinatorial constructions Luca Trevisan Abstract. In combinatorics, the probabilistic method is a very powerful tool to prove the existence of combinatorial objects with interesting

More information

Lecture 10 - MAC s continued, hash & MAC

Lecture 10 - MAC s continued, hash & MAC Lecture 10 - MAC s continued, hash & MAC Boaz Barak March 3, 2010 Reading: Boneh-Shoup chapters 7,8 The field GF(2 n ). A field F is a set with a multiplication ( ) and addition operations that satisfy

More information

Pseudorandomness When the Odds are Against You

Pseudorandomness When the Odds are Against You Pseudorandomness When the Odds are Against You Sergei Aemenko 1, Russell Impagliazzo 2, Valentine Kabanets 3, and Ronen Shaltiel 4 1 Depament of Computer Science, University of Haifa, Haifa, Israel saemen@gmail.com

More information

CS 355: TOPICS IN CRYPTOGRAPHY

CS 355: TOPICS IN CRYPTOGRAPHY CS 355: TOPICS IN CRYPTOGRAPHY DAVID WU Abstract. Preliminary notes based on course material from Professor Boneh s Topics in Cryptography course (CS 355) in Spring, 2014. There are probably typos. Last

More information

Computationally Private Randomizing Polynomials and Their Applications

Computationally Private Randomizing Polynomials and Their Applications Computationally Private Randomizing Polynomials and Their Applications (EXTENDED ABSTRACT) Benny Applebaum Yuval Ishai Eyal Kushilevitz Computer Science Department, Technion {abenny,yuvali,eyalk}@cs.technion.ac.il

More information

Umans Complexity Theory Lectures

Umans Complexity Theory Lectures Complexity Theory Umans Complexity Theory Lectures Lecture 1a: Problems and Languages Classify problems according to the computational resources required running time storage space parallelism randomness

More information

Near-Optimal Secret Sharing and Error Correcting Codes in AC 0

Near-Optimal Secret Sharing and Error Correcting Codes in AC 0 Near-Optimal Secret Sharing and Error Correcting Codes in AC 0 Kuan Cheng Yuval Ishai Xin Li December 18, 2017 Abstract We study the question of minimizing the computational complexity of (robust) secret

More information

Parallel Repetition of Zero-Knowledge Proofs and the Possibility of Basing Cryptography on NP-Hardness

Parallel Repetition of Zero-Knowledge Proofs and the Possibility of Basing Cryptography on NP-Hardness Parallel Repetition of Zero-Knowledge Proofs and the Possibility of Basing Cryptography on NP-Hardness Rafael Pass Cornell University rafael@cs.cornell.edu January 29, 2007 Abstract Two long-standing open

More information

1 Cryptographic hash functions

1 Cryptographic hash functions CSCI 5440: Cryptography Lecture 6 The Chinese University of Hong Kong 23 February 2011 1 Cryptographic hash functions Last time we saw a construction of message authentication codes (MACs) for fixed-length

More information

COMPUTATIONAL COMPLEXITY

COMPUTATIONAL COMPLEXITY COMPUTATIONAL COMPLEXITY A Modern Approach SANJEEV ARORA Princeton University BOAZ BARAK Princeton University {Щ CAMBRIDGE Щ0 UNIVERSITY PRESS Contents About this book Acknowledgments Introduction page

More information

IS VALIANT VAZIRANI S ISOLATION PROBABILITY IMPROVABLE? Holger Dell, Valentine Kabanets, Dieter van Melkebeek, and Osamu Watanabe December 31, 2012

IS VALIANT VAZIRANI S ISOLATION PROBABILITY IMPROVABLE? Holger Dell, Valentine Kabanets, Dieter van Melkebeek, and Osamu Watanabe December 31, 2012 IS VALIANT VAZIRANI S ISOLATION PROBABILITY IMPROVABLE? Holger Dell, Valentine Kabanets, Dieter van Melkebeek, and Osamu Watanabe December 31, 2012 Abstract. The Isolation Lemma of Valiant & Vazirani (1986)

More information

Computational Complexity: A Modern Approach

Computational Complexity: A Modern Approach i Computational Complexity: A Modern Approach Sanjeev Arora and Boaz Barak Princeton University http://www.cs.princeton.edu/theory/complexity/ complexitybook@gmail.com Not to be reproduced or distributed

More information

CS 151 Complexity Theory Spring Solution Set 5

CS 151 Complexity Theory Spring Solution Set 5 CS 151 Complexity Theory Spring 2017 Solution Set 5 Posted: May 17 Chris Umans 1. We are given a Boolean circuit C on n variables x 1, x 2,..., x n with m, and gates. Our 3-CNF formula will have m auxiliary

More information

Essential facts about NP-completeness:

Essential facts about NP-completeness: CMPSCI611: NP Completeness Lecture 17 Essential facts about NP-completeness: Any NP-complete problem can be solved by a simple, but exponentially slow algorithm. We don t have polynomial-time solutions

More information

On the optimal compression of sets in P, NP, P/poly, PSPACE/poly

On the optimal compression of sets in P, NP, P/poly, PSPACE/poly On the optimal compression of sets in P, NP, P/poly, PSPACE/poly Marius Zimand Towson University CCR 2012- Cambridge Marius Zimand (Towson U.) Compression P, NP, P/poly sets 2011 1 / 19 The language compression

More information

Bootstrapping Obfuscators via Fast Pseudorandom Functions

Bootstrapping Obfuscators via Fast Pseudorandom Functions Bootstrapping Obfuscators via Fast Pseudorandom Functions Benny Applebaum October 26, 2013 Abstract We show that it is possible to upgrade an obfuscator for a weak complexity class WEAK into an obfuscator

More information

Lecture Examples of problems which have randomized algorithms

Lecture Examples of problems which have randomized algorithms 6.841 Advanced Complexity Theory March 9, 2009 Lecture 10 Lecturer: Madhu Sudan Scribe: Asilata Bapat Meeting to talk about final projects on Wednesday, 11 March 2009, from 5pm to 7pm. Location: TBA. Includes

More information

CISC 876: Kolmogorov Complexity

CISC 876: Kolmogorov Complexity March 27, 2007 Outline 1 Introduction 2 Definition Incompressibility and Randomness 3 Prefix Complexity Resource-Bounded K-Complexity 4 Incompressibility Method Gödel s Incompleteness Theorem 5 Outline

More information

18.5 Crossings and incidences

18.5 Crossings and incidences 18.5 Crossings and incidences 257 The celebrated theorem due to P. Turán (1941) states: if a graph G has n vertices and has no k-clique then it has at most (1 1/(k 1)) n 2 /2 edges (see Theorem 4.8). Its

More information

On Basing Lower-Bounds for Learning on Worst-Case Assumptions

On Basing Lower-Bounds for Learning on Worst-Case Assumptions On Basing Lower-Bounds for Learning on Worst-Case Assumptions Benny Applebaum Boaz Barak David Xiao Abstract We consider the question of whether P NP implies that there exists some concept class that is

More information

Average-Case Complexity

Average-Case Complexity Foundations and Trends R in Theoretical Computer Science Vol. 2, No 1 (2006) 1 106 c 2006 A. Bogdanov and L. Trevisan DOI: 10.1561/0400000004 Average-Case Complexity Andrej Bogdanov 1 and Luca Trevisan

More information

Lecture 29: Computational Learning Theory

Lecture 29: Computational Learning Theory CS 710: Complexity Theory 5/4/2010 Lecture 29: Computational Learning Theory Instructor: Dieter van Melkebeek Scribe: Dmitri Svetlov and Jake Rosin Today we will provide a brief introduction to computational

More information

CS151 Complexity Theory. Lecture 14 May 17, 2017

CS151 Complexity Theory. Lecture 14 May 17, 2017 CS151 Complexity Theory Lecture 14 May 17, 2017 IP = PSPACE Theorem: (Shamir) IP = PSPACE Note: IP PSPACE enumerate all possible interactions, explicitly calculate acceptance probability interaction extremely

More information

Two Query PCP with Sub-Constant Error

Two Query PCP with Sub-Constant Error Electronic Colloquium on Computational Complexity, Report No 71 (2008) Two Query PCP with Sub-Constant Error Dana Moshkovitz Ran Raz July 28, 2008 Abstract We show that the N P-Complete language 3SAT has

More information

Uniform Derandomization

Uniform Derandomization Uniform Derandomization Simulation of BPP, RP and AM under Uniform Assumptions A. Antonopoulos (N.T.U.A.) Computation and Reasoning Laboratory 1 Uniform Derandomization of BPP Main Theorem Proof: Step

More information

2 Natural Proofs: a barrier for proving circuit lower bounds

2 Natural Proofs: a barrier for proving circuit lower bounds Topics in Theoretical Computer Science April 4, 2016 Lecturer: Ola Svensson Lecture 6 (Notes) Scribes: Ola Svensson Disclaimer: These notes were written for the lecturer only and may contain inconsistent

More information

CS278: Computational Complexity Spring Luca Trevisan

CS278: Computational Complexity Spring Luca Trevisan CS278: Computational Complexity Spring 2001 Luca Trevisan These are scribed notes from a graduate course on Computational Complexity offered at the University of California at Berkeley in the Spring of

More information

Authentication. Chapter Message Authentication

Authentication. Chapter Message Authentication Chapter 5 Authentication 5.1 Message Authentication Suppose Bob receives a message addressed from Alice. How does Bob ensure that the message received is the same as the message sent by Alice? For example,

More information

On the Compressibility of N P Instances and Cryptographic Applications

On the Compressibility of N P Instances and Cryptographic Applications On the Compressibility of N P Instances and Cryptographic Applications Danny Harnik Moni Naor Abstract We study compression that preserves the solution to an instance of a problem rather than preserving

More information

Nondeterministic Circuit Lower Bounds from Mildly Derandomizing Arthur-Merlin Games

Nondeterministic Circuit Lower Bounds from Mildly Derandomizing Arthur-Merlin Games Nondeterministic Circuit Lower Bounds from Mildly Derandomizing Arthur-Merlin Games Barış Aydınlıo glu Department of Computer Sciences University of Wisconsin Madison, WI 53706, USA baris@cs.wisc.edu Dieter

More information

Ma/CS 117c Handout # 5 P vs. NP

Ma/CS 117c Handout # 5 P vs. NP Ma/CS 117c Handout # 5 P vs. NP We consider the possible relationships among the classes P, NP, and co-np. First we consider properties of the class of NP-complete problems, as opposed to those which are

More information

1 Circuit Complexity. CS 6743 Lecture 15 1 Fall Definitions

1 Circuit Complexity. CS 6743 Lecture 15 1 Fall Definitions CS 6743 Lecture 15 1 Fall 2007 1 Circuit Complexity 1.1 Definitions A Boolean circuit C on n inputs x 1,..., x n is a directed acyclic graph (DAG) with n nodes of in-degree 0 (the inputs x 1,..., x n ),

More information