Randomized Complexity Classes; RP

Let N be a polynomial-time precise NTM that runs in time p(n) and has 2 nondeterministic choices at each step. N is a polynomial Monte Carlo Turing machine for a language L if the following conditions hold:
- If x ∈ L, then at least half of the 2^{p(n)} computation paths of N on x halt with "yes", where n = |x|.
- If x ∉ L, then all computation paths halt with "no".
The class of all languages with polynomial Monte Carlo TMs is denoted RP (randomized polynomial time).^a
^a Adleman and Manders (1977).

Comments on RP
Nondeterministic steps can be seen as fair coin flips. There are no false-positive answers. The probability of a false negative, 1 − ε, is at most 0.5, but any constant between 0 and 1 can replace 0.5 in the definition: by repeating the algorithm k = ⌈1/log₂(1/(1 − ε))⌉ times, the probability of a false negative becomes (1 − ε)^k ≤ 0.5. In fact, ε can be arbitrarily close to 0 as long as it is of the order 1/q(n) for some polynomial q(n), since 1/log₂(1/(1 − ε)) = O(1/ε) = O(q(n)).
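
Below is a minimal Python sketch of this repetition trick. Here monte_carlo_test is a hypothetical stand-in for one run of N on input x: it never answers "yes" when x ∉ L, and answers "yes" with probability at least eps when x ∈ L.

import math

def amplify(monte_carlo_test, x, eps):
    # Repeat a one-sided-error test k = ceil(1 / log2(1/(1-eps))) times,
    # which brings the false-negative probability down to (1-eps)^k <= 0.5.
    k = math.ceil(1 / math.log2(1 / (1 - eps)))
    for _ in range(k):
        if monte_carlo_test(x):
            return True   # a "yes" is always correct: no false positives
    return False          # possibly a false negative, with probability <= (1-eps)^k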

Where RP Fits
P ⊆ RP ⊆ NP.
A deterministic TM is like a Monte Carlo TM except that all the coin flips are ignored. A Monte Carlo TM is an NTM with extra demands on the number of accepting paths. compositeness ∈ RP; primes ∈ coRP; primes ∈ RP.^a In fact, primes ∈ P.^b RP ∪ coRP is an alternative plausible notion of efficient computation.
^a Adleman and Huang (1987).
^b Agrawal, Kayal, and Saxena (2002).

ZPP^a (Zero Probabilistic Polynomial)
The class ZPP is defined as RP ∩ coRP. A language in ZPP has two Monte Carlo algorithms, one with no false positives and the other with no false negatives. If we repeatedly run both Monte Carlo algorithms, eventually one definite answer will come (unlike RP): a positive answer from the one without false positives, or a negative answer from the one without false negatives.
^a Gill (1977).

The ZPP Algorithm (Las Vegas)
1: {Suppose L ∈ ZPP.}
2: {N_1 has no false positives, and N_2 has no false negatives.}
3: while true do
4:   if N_1(x) = "yes" then
5:     return "yes";
6:   end if
7:   if N_2(x) = "no" then
8:     return "no";
9:   end if
10: end while

ZPP (concluded)
The expected running time for the correct answer to emerge is polynomial. The probability that a run of the 2 algorithms does not generate a definite answer is at most 0.5 (why?). Let p(n) be the running time of each run of the while-loop. The expected running time for a definite answer is at most
  Σ_{i=1}^∞ 0.5^i · i · p(n) = 2 p(n).
Essentially, ZPP is the class of problems that can be solved without errors in expected polynomial time.
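
A minimal Python sketch of the Las Vegas loop above, on a toy instance. The stand-ins n1 (no false positives) and n2 (no false negatives) are hypothetical; each gives an indefinite answer with probability 1/2, so the average number of rounds should come out near 2, matching the sum above.

import random

def zpp_decide(n1, n2, x):
    # Run both Monte Carlo machines until one of them gives a definite answer.
    rounds = 0
    while True:
        rounds += 1
        if n1(x) == "yes":   # a "yes" from n1 is always correct
            return "yes", rounds
        if n2(x) == "no":    # a "no" from n2 is always correct
            return "no", rounds

# Toy instance: x itself encodes membership (True/False).
def n1(x):  # no false positives: never says "yes" on a non-member
    return "yes" if x and random.random() < 0.5 else "no"

def n2(x):  # no false negatives: never says "no" on a member
    return "no" if (not x) and random.random() < 0.5 else "yes"

print(sum(zpp_decide(n1, n2, True)[1] for _ in range(10000)) / 10000)  # about 2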

Large Deviations
Suppose you have a biased coin. One side has probability 0.5 + ε of appearing and the other 0.5 − ε, for some 0 < ε < 0.5. But you do not know which is which. How do you decide which side is the more likely one with high confidence? Answer: flip the coin many times and pick the side that appeared more often. Question: can you quantify the confidence?
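
A quick Python simulation of this majority rule; the parameters are illustrative only, and Corollary 70 below bounds the failure probability by e^{−ε²n/2}.

import math, random

def majority_guess(eps, flips):
    # Flip a coin with heads-probability 0.5 + eps and guess "heads"
    # iff heads appeared more often than tails.
    heads = sum(random.random() < 0.5 + eps for _ in range(flips))
    return heads > flips / 2

eps, flips, trials = 0.1, 100, 10000
correct = sum(majority_guess(eps, flips) for _ in range(trials))
print("empirical success rate:", correct / trials)
print("Chernoff guarantee:    ", 1 - math.exp(-eps * eps * flips / 2))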

The Chernoff Bound^a
Theorem 69 (Chernoff (1952)). Suppose x_1, x_2, ..., x_n are independent random variables taking the values 1 and 0 with probabilities p and 1 − p, respectively. Let X = Σ_{i=1}^n x_i. Then for all 0 ≤ θ ≤ 1,
  prob[X ≥ (1 + θ) pn] ≤ e^{−θ² pn/3}.
The probability that a binomial random variable deviates from its expected value E[X] = Σ_{i=1}^n E[x_i] = pn decreases exponentially with the deviation.
^a Herman Chernoff (1923–). The Chernoff bound is asymptotically optimal.
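
A small empirical check of the bound in Python; the parameters are illustrative, and the estimated tail probability should come out well below the bound.

import math, random

def chernoff_check(n=200, p=0.25, theta=0.5, trials=20000):
    # Estimate prob[X >= (1+theta)*p*n] for X ~ Binomial(n, p) and
    # compare it with the bound exp(-theta^2 * p * n / 3) of Theorem 69.
    threshold = (1 + theta) * p * n
    hits = sum(sum(random.random() < p for _ in range(n)) >= threshold
               for _ in range(trials))
    return hits / trials, math.exp(-theta ** 2 * p * n / 3)

print(chernoff_check())  # (empirical tail, Chernoff bound)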

The Proof
Let t be any positive real number. Then
  prob[X ≥ (1 + θ) pn] = prob[e^{tX} ≥ e^{t(1+θ) pn}].
Markov's inequality (p. 460) generalized to real-valued random variables says that
  prob[e^{tX} ≥ k E[e^{tX}]] ≤ 1/k.
With k = e^{t(1+θ) pn} / E[e^{tX}], we have
  prob[X ≥ (1 + θ) pn] ≤ e^{−t(1+θ) pn} E[e^{tX}].

The Proof (continued)
Because X = Σ_{i=1}^n x_i and the x_i's are independent,
  E[e^{tX}] = (E[e^{t x_1}])^n = [1 + p(e^t − 1)]^n.
Substituting, we obtain
  prob[X ≥ (1 + θ) pn] ≤ e^{−t(1+θ) pn} [1 + p(e^t − 1)]^n ≤ e^{−t(1+θ) pn} e^{pn(e^t − 1)},
as (1 + a)^n ≤ e^{an} for all a > 0.

The Proof (concluded)
With the choice of t = ln(1 + θ), the above becomes
  prob[X ≥ (1 + θ) pn] ≤ e^{pn[θ − (1+θ) ln(1+θ)]}.
For 0 ≤ θ ≤ 1, the exponent θ − (1 + θ) ln(1 + θ) expands to
  −θ²/2 + θ³/6 − θ⁴/12 + ···,
which is at most
  −θ²/2 + θ³/6 = −θ² (1/2 − θ/6) ≤ −θ² (1/2 − 1/6) = −θ²/3.
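
A quick numeric spot-check (illustrative only) of the final inequality, θ − (1 + θ) ln(1 + θ) ≤ −θ²/3 on [0, 1]:

import math

def gap(theta):
    # LHS minus RHS of:  theta - (1+theta)*ln(1+theta)  <=  -theta^2/3
    return theta - (1 + theta) * math.log(1 + theta) + theta * theta / 3

print(max(gap(i / 1000) for i in range(1001)))  # should be <= 0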

Power of the Majority Rule
From prob[X ≤ (1 − θ) pn] ≤ e^{−θ² pn/2} (prove it):
Corollary 70. If p = (1/2) + ε for some 0 ≤ ε ≤ 1/2, then
  prob[Σ_{i=1}^n x_i ≤ n/2] ≤ e^{−ε² n/2}.
The textbook's corollary to Lemma 11.9 seems incorrect. Our original problem (p. 518) hence demands 1.4 k/ε² independent coin flips to guarantee making an error with probability at most 2^{−k} with the majority rule.
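
Where the 1.4 k/ε² figure comes from, assuming the bound of Corollary 70 (a short derivation; 2 ln 2 ≈ 1.39):

\[
  e^{-\epsilon^2 n/2} \le 2^{-k}
  \iff \frac{\epsilon^2 n}{2} \ge k \ln 2
  \iff n \ge \frac{2 \ln 2}{\epsilon^2}\, k \approx \frac{1.39\, k}{\epsilon^2}.
\]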

BPP^a (Bounded Probabilistic Polynomial)
The class BPP contains all languages L for which there is a precise polynomial-time NTM N such that:
- If x ∈ L, then at least 3/4 of the computation paths of N on x lead to "yes".
- If x ∉ L, then at least 3/4 of the computation paths of N on x lead to "no".
N accepts or rejects by a clear majority.
^a Gill (1977).

Magic 3/4?
The number 3/4 bounds the probability of a right answer away from 1/2. Any constant strictly between 1/2 and 1 can be used without affecting the class BPP. In fact, a threshold of 0.5 plus any inverse polynomial, 0.5 + 1/p(n), can be used.

The Majority Vote Algorithm
Suppose L is decided by N by majority (1/2) + ε.
1: for i = 1, 2, ..., 2k + 1 do
2:   Run N on input x;
3: end for
4: if "yes" is the majority answer then
5:   "yes";
6: else
7:   "no";
8: end if

Analysis
The running time remains polynomial, being 2k + 1 times N's running time. By Corollary 70 (p. 523), the probability of a false answer is at most e^{−ε² k}. By taking k = 2/ε², the error probability is at most 1/4. As with the RP case, ε can be any inverse polynomial, because k remains polynomial in n.
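
A minimal Python sketch of the majority-vote amplification above. Here bpp_run is a hypothetical stand-in for one run of N on x, assumed to give the correct answer with probability at least (1/2) + eps.

import math

def majority_vote(bpp_run, x, eps):
    # Run the two-sided-error procedure 2k+1 times with k = ceil(2/eps^2),
    # so the majority answer errs with probability at most e^{-eps^2 k} <= e^{-2} < 1/4.
    k = math.ceil(2 / (eps * eps))
    yes_votes = sum(bpp_run(x) for _ in range(2 * k + 1))
    return yes_votes > k  # strict majority of the 2k+1 runs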

Probability Amplification for BPP
Let m be the number of random bits used by a BPP algorithm. By definition, m is polynomial in n. With k = Θ(log m) in the majority vote algorithm, we can lower the error probability to, say, (3m)^{−1}.

Aspects of BPP
BPP is the most comprehensive yet plausible notion of efficient computation. If a problem is in BPP, we take it to mean that the problem can be solved efficiently; in this respect, BPP has effectively replaced P. (RP ∪ coRP) ⊆ (NP ∪ coNP), and (RP ∪ coRP) ⊆ BPP. Whether BPP ⊆ (NP ∪ coNP) is unknown. But it is unlikely that NP ⊆ BPP (p. 544).

coBPP
The definition of BPP is symmetric: acceptance by clear majority and rejection by clear majority. An algorithm for L ∈ BPP becomes one for the complement of L by reversing the answer. So the complement of L is in BPP, and BPP ⊆ coBPP. Similarly, coBPP ⊆ BPP. Hence BPP = coBPP. This approach does not work for RP. It did not work for NP either.

BPP and coBPP
[Figure omitted: paired computation trees whose "yes"/"no" leaf labels are swapped between BPP and coBPP.]

The Good, the Bad, and the Ugly
[Figure omitted: inclusion diagram of P, ZPP, RP, coRP, BPP, NP, and coNP.]

Circuit Complexity
Circuit complexity is based on boolean circuits instead of Turing machines. A boolean circuit with n inputs computes a boolean function of n variables. By identifying true/1 with "yes" and false/0 with "no", a boolean circuit with n inputs accepts certain strings in {0, 1}^n. To relate circuits with arbitrary languages, we need one circuit for each possible input length n.

Formal Definitions
The size of a circuit is the number of gates in it. A family of circuits is an infinite sequence C = (C_0, C_1, ...) of boolean circuits, where C_n has n boolean inputs. For an input x ∈ {0, 1}*, C_{|x|} outputs 1 if and only if x ∈ L; in other words, C_n accepts L ∩ {0, 1}^n. L ⊆ {0, 1}* has polynomial circuits if there is a family of circuits C such that:
- The size of C_n is at most p(n) for some fixed polynomial p.
- C_n accepts L ∩ {0, 1}^n.
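
A minimal Python sketch of evaluating one circuit C_n on an input. The gate-list representation (topologically ordered AND/OR/NOT gates plus input references) is just an illustrative choice, not the text's formal definition.

def eval_circuit(gates, x):
    # gates: list of ("INPUT", i), ("NOT", a), ("AND", a, b), ("OR", a, b)
    # in topological order; a, b index earlier gates; the last gate is the output.
    val = []
    for g in gates:
        if g[0] == "INPUT":
            val.append(x[g[1]])
        elif g[0] == "NOT":
            val.append(1 - val[g[1]])
        elif g[0] == "AND":
            val.append(val[g[1]] & val[g[2]])
        else:  # "OR"
            val.append(val[g[1]] | val[g[2]])
    return val[-1]

# C_2 for XOR: it accepts {01, 10} out of {0,1}^2.
xor = [("INPUT", 0), ("INPUT", 1), ("NOT", 0), ("NOT", 1),
       ("AND", 0, 3), ("AND", 2, 1), ("OR", 4, 5)]
print([eval_circuit(xor, [a, b]) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]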

Exponential Circuits Contain All Languages
Theorem 14 (p. 171) implies that there are languages that cannot be solved by circuits of size 2^n/(2n). But exponential circuits can solve all problems.
Proposition 71. All decision problems (decidable or otherwise) can be solved by a circuit of size 2^{n+2}.
We will show that for any language L ⊆ {0, 1}*, L ∩ {0, 1}^n can be decided by a circuit of size 2^{n+2}.

The Proof (concluded)
Define the boolean function f : {0, 1}^n → {0, 1} by
  f(x_1 x_2 ··· x_n) = 1 if x_1 x_2 ··· x_n ∈ L, and 0 if x_1 x_2 ··· x_n ∉ L.
Then
  f(x_1 x_2 ··· x_n) = (x_1 ∧ f(1 x_2 ··· x_n)) ∨ (¬x_1 ∧ f(0 x_2 ··· x_n)).
The circuit size s(n) for f(x_1 x_2 ··· x_n) hence satisfies
  s(n) = 4 + 2 s(n − 1)
with s(1) = 1. Solve it to obtain s(n) = 5 · 2^{n−1} − 4 ≤ 2^{n+2}.
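
A quick check (illustrative only) of the recurrence's closed form and the 2^{n+2} bound:

def s(n):
    # s(n) = 4 + 2*s(n-1), s(1) = 1: one NOT, two ANDs, and one OR,
    # plus two copies of the circuit for the (n-1)-bit suffixes.
    return 1 if n == 1 else 4 + 2 * s(n - 1)

for n in range(1, 11):
    closed = 5 * 2 ** (n - 1) - 4
    assert s(n) == closed and closed <= 2 ** (n + 2)
print("s(n) = 5*2^(n-1) - 4 <= 2^(n+2) for n = 1..10")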

The Circuit Complexity of P
Proposition 72. All languages in P have polynomial circuits.
Let L ∈ P be decided by a TM in time p(n). By Corollary 31 (p. 261), there is a circuit with O(p(n)²) gates that accepts L ∩ {0, 1}^n. The size of the circuit depends only on L and the length of the input, and it is polynomial in n.

Polynomial Circuits vs. P
Is the converse of Proposition 72 true? Do polynomial circuits accept only languages in P? They can accept undecidable languages!

Languages That Polynomial Circuits Accept (concluded)
Let L ⊆ {0, 1}* be an undecidable language. Let U = {1^n : the binary expansion of n is in L}.^a For example, 11111 ∈ U if 101 ∈ L (101 is the binary expansion of 5). U is also undecidable. U ∩ {1}^n can be accepted by the trivial circuit C_n that outputs 1 if 1^n ∈ U and outputs 0 if 1^n ∉ U. We may not know which is the case for general n. The family of circuits (C_0, C_1, ...) is polynomial in size.
^a Assume n's leading bit is always 1 without loss of generality.

A Patch
Despite the simplicity of a circuit, the previous discussions imply the following: circuits are not a realistic model of computation, and polynomial circuits are not a plausible notion of efficient computation. What is missing? The effective and efficient constructibility of C_0, C_1, ....

Uniformity
A family (C_0, C_1, ...) of circuits is uniform if there is a log n-space bounded TM which on input 1^n outputs C_n. Note that n is the length of the input to C_n. Circuits now cannot accept undecidable languages (why?): the circuit family on p. 539 is not constructible by a single Turing machine (algorithm). A language has uniformly polynomial circuits if there is a uniform family of polynomial circuits that decide it.

Uniformly Polynomial Circuits and P
Theorem 73. L ∈ P if and only if L has uniformly polynomial circuits.
One direction was proved in Proposition 72 (p. 537). Now suppose L has uniformly polynomial circuits. Decide x ∈ L in polynomial time as follows: calculate n = |x|; generate C_n in log n space, hence in polynomial time; evaluate the circuit with input x in polynomial time. Therefore L ∈ P.

Relation to P vs. NP
Theorem 73 implies that P ≠ NP if and only if NP-complete problems have no uniformly polynomial circuits. A stronger conjecture: NP-complete problems have no polynomial circuits, uniformly or not. The above is currently the preferred approach to proving the P ≠ NP conjecture, without success so far.

BPP's Circuit Complexity
Theorem 74 (Adleman (1978)). All languages in BPP have polynomial circuits.
Our proof will be nonconstructive in that only the existence of the desired circuits is shown. Recall our proof of Theorem 14 (p. 171): something exists if its probability of existence is nonzero. It is not known how to generate circuit C_n efficiently. If the construction of C_n can be made efficient, then P = BPP, an unlikely result.

The Proof
Let L ∈ BPP be decided by a precise NTM N by clear majority. We shall prove that L has polynomial circuits C_0, C_1, .... Suppose N runs in time p(n), where p(n) is a polynomial. Let A_n = {a_1, a_2, ..., a_m}, where a_i ∈ {0, 1}^{p(n)}. Pick m = 12(n + 1). Each a_i ∈ A_n represents a sequence of nondeterministic choices (i.e., a computation path) for N.

The Proof (continued)
Let x be an input with |x| = n. Circuit C_n simulates N on x with each sequence of choices in A_n and then takes the majority of the m outcomes.^a Because N with a_i is a polynomial-time TM, it can be simulated by polynomial circuits of size O(p(n)²); see the proof of Proposition 72 (p. 537). The size of C_n is therefore O(m p(n)²) = O(n p(n)²). This is a polynomial.
^a As m is even, there may be no clear majority. Still, the probability of that happening is very small and does not materially affect our general conclusion. Thanks to a lively class discussion on December 14, 2010.

The Circuit
[Figure omitted: copies of the circuit simulating N, one for each choice sequence a_i, feeding into majority logic.]

The Proof (continued)
We now prove the existence of an A_n making C_n correct on all n-bit inputs. Call a_i bad if it leads N to a false positive or a false negative. Select A_n uniformly at random. For each x ∈ {0, 1}^n, at most 1/4 of the computations of N are erroneous. Because the sequences in A_n are chosen randomly and independently, the expected number of bad a_i's is at most m/4.

The Proof (continued)
By the Chernoff bound (p. 519), the probability that the number of bad a_i's is m/2 or more is at most e^{−m/12} < 2^{−(n+1)}. So the error probability is < 2^{−(n+1)} for each x ∈ {0, 1}^n. By the union bound, prob[A ∪ B ∪ ···] ≤ prob[A] + prob[B] + ···, the probability that there is an x such that A_n results in an incorrect answer is < 2^n · 2^{−(n+1)} = 2^{−1}. Note that each A_n yields a circuit. We just showed that at least half of them are correct.
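
The numeric step spelled out, taking the worst case p = 1/4 and θ = 1 in Theorem 69, with m = 12(n + 1):

\[
  \Pr[\text{number of bad } a_i \ge m/2]
  = \Pr\!\left[X \ge (1+1)\,\tfrac{m}{4}\right]
  \le e^{-1^2 \cdot (m/4)/3}
  = e^{-m/12}
  = e^{-(n+1)}
  < 2^{-(n+1)},
\]
where the last step uses e > 2.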

The Proof (concluded)
So with probability at least 0.5, a random A_n produces a correct C_n for all inputs of length n. Because this probability exceeds 0, an A_n that makes majority vote work for all inputs of length n exists. Hence a correct C_n exists.^a We have used the probabilistic method.
^a Quine (1948), "To be is to be the value of a bound variable."