
UC Berkeley Handout N5
CS294: Pseudorandomness and Combinatorial Constructions    September 13, 2005
Professor Luca Trevisan    Scribe: Gatis Midrijanis

Notes for Lecture 5

In the next few lectures we are going to look at generators that are good against circuits of feasible size. The constructions will be conditional, relying on complexity-theoretic assumptions.

1 Circuits

A circuit C has n inputs, m outputs, and is built from AND gates, OR gates and NOT gates. Each gate has fan-in 2, except the NOT gates, which have fan-in 1. The fan-out can be any number. One can view a circuit as a directed graph without cycles. See Figure 1 for an example.

Figure 1: A Boolean circuit.

A circuit C with n input gates and m output gates computes a function f : {0,1}^n → {0,1}^m once we fix an ordering of the input and output gates; the computation then proceeds in the intuitive way. The size of a circuit C is defined as the number of AND and OR gates in it. By convention, we do not count the NOT gates. (Counting them would increase the size of the circuit by at most a factor of 2.) If m = 1 then the circuit C computes a predicate. It is well known that any reasonable change in this definition is polynomially related to this one; if the fan-in is bounded, the relation is even linear.

Unlike uniform complexity measures, like time and space, for which there are languages of arbitrarily high complexity, the circuit-size complexity of a problem is always at most exponential.

Theorem 1 For every function f : {0,1}^n → {0,1} there is a circuit of size O(2^n) that computes f.
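As an aside (not part of the original notes), the circuit model above is easy to make concrete in code. The following Python sketch, with an arbitrary list-of-gates representation chosen just for this illustration, evaluates a circuit built from fan-in-2 AND/OR gates and fan-in-1 NOT gates and reports its size under the convention that only AND and OR gates are counted.

```python
# Minimal sketch of the circuit model above: gates are ("AND", a, b),
# ("OR", a, b), ("NOT", a), or ("IN", i); size counts only AND/OR gates.
def eval_circuit(gates, outputs, x):
    """Evaluate a circuit on input bits x; gates are listed in topological order."""
    val = []
    for g in gates:
        if g[0] == "IN":
            val.append(x[g[1]])
        elif g[0] == "NOT":
            val.append(1 - val[g[1]])
        elif g[0] == "AND":
            val.append(val[g[1]] & val[g[2]])
        elif g[0] == "OR":
            val.append(val[g[1]] | val[g[2]])
    return [val[o] for o in outputs]

def size(gates):
    """Circuit size: number of AND and OR gates (NOT gates are not counted)."""
    return sum(1 for g in gates if g[0] in ("AND", "OR"))

# Example: the predicate x1 XOR x2, written with AND/OR/NOT gates.
gates = [("IN", 0), ("IN", 1),          # gates 0,1: inputs x1, x2
         ("NOT", 0), ("NOT", 1),        # gates 2,3
         ("AND", 0, 3), ("AND", 2, 1),  # gates 4,5: x1 AND (NOT x2), (NOT x1) AND x2
         ("OR", 4, 5)]                  # gate 6: the output
assert eval_circuit(gates, [6], [1, 0]) == [1]
assert size(gates) == 3                 # 2 AND gates + 1 OR gate
```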

Proof: We use the identity

  f(x_1, ..., x_n) = (x_1 ∧ f(1, x_2, ..., x_n)) ∨ (¬x_1 ∧ f(0, x_2, ..., x_n))

to recursively construct a circuit for f, as shown in Figure 2. The recurrence relation for the size of the circuit is s(n) = 3 + 2·s(n-1), with base case s(1) = 1, which solves to s(n) = 2^{n+1} - 3 = O(2^n).

Figure 2: A circuit computing an arbitrary function f(x_1, x_2, ..., x_n), given circuits for the functions f(1, x_2, ..., x_n) and f(0, x_2, ..., x_n).

This bound is almost tight: the best possible bound is O(2^n / n). Another important fact is that everything that is computable by a Turing machine is computable by circuits of similar complexity.

Theorem 2 Let M be a Turing machine such that on inputs of length n it always halts within t(n) steps (and accepts or rejects). Then for every n there is a circuit of size at most c_M · (t(n))^2, with n inputs, that simulates M on inputs of length n, for some constant c_M depending only on M.

Proof: Let Σ denote the tape alphabet of M and Q its set of states. Consider the t(n) × t(n) tableau of the computation of M(x); see Figure 3. Let k = ⌈log_2(|Σ| · (|Q| + 1))⌉ denote the number of bits necessary to encode each entry of the tableau. Each entry depends only on the three entries above it. By Theorem 1, the transition function f : {0,1}^{3k} → {0,1}^k used by M can be implemented by a circuit of size k · O(2^{3k}), which is a constant independent of n. Therefore the total size of a circuit that computes the complete tableau is O(k · 2^{3k} · (t(n))^2) = O((t(n))^2). This is shown in Figure 4.

The best known result of this kind gives a c_M · t(n) · log t(n) upper bound.
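As a quick sanity check on the proof of Theorem 1 (an illustration added here, not from the original notes), the following Python sketch evaluates an arbitrary predicate through the expansion f(x_1, ..., x_n) = (x_1 ∧ f(1, x_2, ..., x_n)) ∨ (¬x_1 ∧ f(0, x_2, ..., x_n)) and verifies that the gate count of the recursive construction satisfies s(n) = 2^{n+1} - 3.

```python
# Sketch of the construction in the proof of Theorem 1.

def build_size(n):
    """Gate count of the recursive construction for an n-input predicate:
    2 ANDs + 1 OR per level, plus two subcircuits on n-1 variables."""
    if n == 1:
        return 1                        # base case s(1) = 1
    return 3 + 2 * build_size(n - 1)

for n in range(1, 8):
    assert build_size(n) == 2 ** (n + 1) - 3   # matches s(n) = 2^{n+1} - 3

def shannon_eval(f, x):
    """Evaluate a predicate f (a Python function on bit tuples) on input x,
    using only the expansion from the proof, to illustrate the identity."""
    if len(x) == 1:
        return f((x[0],))
    f1 = lambda rest: f((1,) + rest)    # restriction f(1, x2, ..., xn)
    f0 = lambda rest: f((0,) + rest)    # restriction f(0, x2, ..., xn)
    return (x[0] & shannon_eval(f1, x[1:])) | ((1 - x[0]) & shannon_eval(f0, x[1:]))

# Usage: check the expansion against a direct evaluation of majority-of-3.
maj3 = lambda bits: int(sum(bits) >= 2)
assert all(shannon_eval(maj3, (a, b, c)) == maj3((a, b, c))
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```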

Figure 3: The t(n) × t(n) tableau of the computation. The left entry of each cell is the tape symbol at that position and time; the right entry is either the machine state or a blank symbol, depending on the position of the machine head.

Figure 4: The circuit that simulates the computation of the Turing machine M by constructing the tableau (each entry encoded in k bits), followed by a check for an accepting configuration.

2 Pseudorandom Generators

Definition 3 (Indistinguishability - finite definition) Two random variables X and Y distributed over {0,1}^n are called (S, ɛ)-indistinguishable if for every circuit C of size at most S,

  | Pr_{x ~ X}[C(x) = 1] - Pr_{y ~ Y}[C(y) = 1] | ≤ ɛ.    (1)

Definition 4 (Pseudorandomness - finite definition) A random variable X is called (S, ɛ)-pseudorandom if it is (S, ɛ)-indistinguishable from U_n, the uniform distribution over {0,1}^n.

This definition is much simpler than the definition in terms of Turing machines, because for Turing machines we would have to fix a universal machine in order to obtain a finite definition. We can also give an asymptotic definition of a pseudorandom generator, as follows.

Definition 5 (BMY-type pseudorandom generator - asymptotic definition) A function G : {0,1}^n → {0,1}^{l(n)}, l(n) > n, is called a BMY-type (Blum-Micali-Yao) pseudorandom generator with stretch l(n) if

  - G on inputs of length n has outputs of length l(n) > n;
  - G is computable in polynomial time;
  - for every polynomials p and q and for every sufficiently large n, G(U_n) is (p(n), 1/q(n))-pseudorandom.

For example, take l(n) = n + 1 and G : {0,1}^n → {0,1}^{n+1}, and let C be a circuit with n + 1 inputs of size at most p(n). If G is a BMY-type pseudorandom generator then

  | Pr_{x ∈ {0,1}^n}[C(G(x)) = 1] - Pr_{r ∈ {0,1}^{n+1}}[C(r) = 1] | ≤ 1/q(n)    (2)

for every polynomial q(n) and every large enough n.

It is easy to see that if P = NP (or even if NP is solvable by polynomial size circuits) then no BMY-type generators exist: since G(x) can be computed by a Turing machine in polynomial time, given an output y we can nondeterministically guess x and check whether G(x) = y. By Theorem 2 this algorithm can be simulated by a polynomial size circuit.

3 One-Way Functions

The main result of this and the next lecture is the following.

Theorem 6 If one-way permutations exist then pseudorandom generators exist.

Definition 7 (One-Way Permutation - finite definition) A bijective function f : {0,1}^n → {0,1}^n is an (S, ɛ)-one-way permutation if for every circuit C of size at most S,

  Pr_{x ∈ {0,1}^n}[C(f(x)) = x] ≤ ɛ.    (3)
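The quantity bounded in (1) and (2) is often called the distinguishing advantage of the test C. Purely as an illustration (not from the notes, and with the circuit C replaced by an arbitrary Python function), the following sketch estimates this advantage for a deliberately insecure "generator" that just repeats its seed, against the obvious statistical test that checks whether the two halves of its input agree; both are invented for this example.

```python
import random

# Toy illustration of the advantage |Pr[C(G(x))=1] - Pr[C(r)=1]| from (1)/(2).
# G is a deliberately insecure "generator" and C a trivial statistical test;
# neither appears in the lecture notes.

def G(x):                      # stretch n bits to 2n bits by repeating the seed
    return x + x

def C(z):                      # test: are the two halves of z equal?
    half = len(z) // 2
    return 1 if z[:half] == z[half:] else 0

def advantage(n, trials=20_000):
    """Monte Carlo estimate of |Pr_x[C(G(x))=1] - Pr_r[C(r)=1]|."""
    rand_bits = lambda m: tuple(random.randint(0, 1) for _ in range(m))
    p_pseudo = sum(C(G(rand_bits(n))) for _ in range(trials)) / trials
    p_unif = sum(C(rand_bits(2 * n)) for _ in range(trials)) / trials
    return abs(p_pseudo - p_unif)

print(advantage(16))   # close to 1: C distinguishes G's output from uniform
```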

Definition 8 (One-Way Permutation - asymptotic definition) Call f = {f_n}_n one-way, where f_n : {0,1}^n → {0,1}^n is a bijective function computable in polynomial time, if for every polynomials p and q and for every n large enough, f_n is (p(n), 1/q(n))-one-way.

For example, let g : N → N be the function g(x) := x^3 mod N, where N = pq for randomly chosen prime numbers p and q such that 3 does not divide (p - 1)(q - 1). Strictly speaking, g cannot be a one-way permutation under the previous definition, even when we represent natural numbers by bit strings, but there are several ways to extend the definition to include such functions. First, we can allow more structured inputs and outputs for one-way functions; in this case, by allowing the prime numbers p and q to be part of the input. Second, we could speak about probability distributions over families of functions {f}. Neither of these changes to the definition affects the results of this lecture.

Definition 9 (Hard-Core Predicate - finite definition) A function B : {0,1}^n → {0,1} is an (S, ɛ)-hard-core predicate for a permutation f : {0,1}^n → {0,1}^n if for every circuit C of size at most S,

  Pr_{x ∈ {0,1}^n}[C(f(x)) = B(x)] ≤ 1/2 + ɛ.    (4)

Definition 10 (Hard-Core Predicate - asymptotic definition) B = {B_n}_n is a hard-core predicate for a family of permutations f = {f_n}_n, where B_n : {0,1}^n → {0,1} and f_n : {0,1}^n → {0,1}^n are polynomial time computable functions, if for every polynomials p and q and for every n large enough, B_n is (p(n), 1/q(n))-hard-core for f_n.

For the above-mentioned function g, the analogue of a hard-core predicate is x mod 2. This means that distinguishing (x mod 2) from a random bit is as hard as computing x given only g(x).

Exercise: Prove that if B is a hard-core predicate for f then f is a one-way permutation. Hint: notice that if f were not one-way, then we could compute x from f(x), and then B(x), in polynomial time.

In the next lecture we will prove the following two important theorems.

Theorem 11 (Goldreich-Levin) Let f be a one-way permutation, and define g_n(x, r) := (f_n(x), r) and B_n(x, r) := x · r = Σ_i x_i r_i mod 2. Then g is a one-way permutation and B is a hard-core predicate for g.

Theorem 12 (Blum-Micali-Yao) If B is a hard-core predicate for the permutation f, then G(x) := f(x), B(x) is a BMY-type pseudorandom generator.
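To see how Theorems 11 and 12 fit together, here is a toy Python sketch (illustrative only, not from the notes): it implements the Goldreich-Levin predicate B(x, r) = Σ_i x_i r_i mod 2 and the resulting one-bit-stretch generator (x, r) ↦ f(x), r, B(x, r). The permutation f is the cubing map g from the example above, but with a tiny hard-coded modulus N = 187 that is far too small for any hardness, and the encoding issues discussed after Definition 8 are glossed over.

```python
# Toy sketch of the Goldreich-Levin hard-core bit and the BMY one-bit generator
# built from it.  The "one-way" permutation here is x -> x^3 mod N for a tiny N
# chosen so that 3 does not divide (p-1)(q-1), i.e. the example g from the
# notes, but with parameters that are trivially invertible.
N = 187                      # N = 11 * 17; (p-1)(q-1) = 160 is not divisible by 3
NBITS = 8                    # enough bits to write down numbers in {0, ..., N-1}

def f(x):                    # toy permutation of Z_N (not literally of {0,1}^n; see Def. 8)
    return pow(x, 3, N)

def bits(x, n=NBITS):        # fixed-length binary encoding, most significant bit first
    return tuple((x >> i) & 1 for i in reversed(range(n)))

def B(x_bits, r_bits):       # Goldreich-Levin predicate: inner product <x, r> mod 2
    return sum(a & b for a, b in zip(x_bits, r_bits)) % 2

def G(x, r_bits):            # generator from Theorem 12 applied to Theorem 11:
    return bits(f(x)) + r_bits + (B(bits(x), r_bits),)   # output f(x), r, <x, r>

# The output is one bit longer than the input (x, r):
x, r = 123, bits(45)
assert len(G(x, r)) == 2 * NBITS + 1
```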