03684155: On the P vs. BPP problem. 30/12/2016 Lecture 11
Promise problems
Amnon Ta-Shma and Dean Doron

1 Definitions and examples

In a promise problem, we are interested in solving a problem only on a subset of the inputs, which is a very natural phenomenon ("given a connected graph, determine whether or not..."). Formally:

Definition 1. A promise problem Π is a pair (Π_Y, Π_N) such that Π_Y, Π_N ⊆ {0,1}* and Π_Y ∩ Π_N = ∅. The set Π_Y ∪ Π_N is called the promise.

Given a complexity class C, we write Promise-C for the class of promise problems decidable via the same resources as problems in C. For example, Π = (Π_Y, Π_N) ∈ Promise-BPP if there exists a probabilistic polynomial-time algorithm M such that:

- For every x ∈ Π_Y it holds that Pr_y[M(x, y) = 1] ≥ 2/3.
- For every x ∈ Π_N it holds that Pr_y[M(x, y) = 0] ≥ 2/3.

Note that we impose no requirement on the output of M on inputs outside the promise (we do, however, still require M to run in polynomial time).

Reductions also extend naturally to promise problems.

Definition 2. The promise problem Π = (Π_Y, Π_N) is (Karp) reducible to the promise problem Π' = (Π'_Y, Π'_N) if there exists a polynomial-time computable function f such that:

- For every x ∈ Π_Y it holds that f(x) ∈ Π'_Y.
- For every x ∈ Π_N it holds that f(x) ∈ Π'_N.

Definition 3. The promise problem Π = (Π_Y, Π_N) is Cook-reducible to the promise problem Π' = (Π'_Y, Π'_N) if there exists a polynomial-time oracle TM M such that:

- For every x ∈ Π_Y it holds that M^{Π'}(x) = 1.
- For every x ∈ Π_N it holds that M^{Π'}(x) = 0.

Although it is unknown whether or not BPP has a complete problem, the following problem is easily seen to be complete for Promise-BPP (under Karp reductions): the Yes instances are Boolean circuits that evaluate to 1 on at least a 2/3 fraction of their inputs, whereas the No instances are Boolean circuits that evaluate to 0 on at least a 2/3 fraction of their inputs.
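The natural algorithm placing this circuit-acceptance problem in Promise-BPP is plain sampling. The sketch below is an illustration, not part of the notes; `capp_decide`, its sample count, and the lambda "circuits" are all hypothetical choices.

```python
import random

def capp_decide(circuit, n_inputs, samples=2000):
    """Sampling algorithm for the Promise-BPP-complete problem above.

    Yes instances: circuits outputting 1 on >= 2/3 of their inputs.
    No instances:  circuits outputting 0 on >= 2/3 of their inputs.
    Under the promise, the empirical acceptance rate concentrates far
    from 1/2, so thresholding at 1/2 errs with probability exponentially
    small in `samples` (Chernoff bound). Outside the promise the answer
    is arbitrary, which is exactly what Promise-BPP permits.
    """
    hits = sum(circuit([random.randint(0, 1) for _ in range(n_inputs)])
               for _ in range(samples))
    return 1 if 2 * hits > samples else 0
```

For instance, an OR of two input bits accepts a 3/4 fraction of its inputs (a Yes instance), while an AND accepts only a 1/4 fraction (a No instance).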
A more striking example that differentiates promise problems from their decision-problem counterparts is the existence of a Promise-NP-hard problem (under Cook reductions) in Promise-NP ∩ Promise-coNP. An analogous result in the non-promise world would imply NP = coNP. The promise problem PSAT is the following:

- Yes instances: pairs (ϕ_1, ϕ_2) such that ϕ_1 ∈ SAT and ϕ_2 ∉ SAT.
- No instances: pairs (ϕ_1, ϕ_2) such that ϕ_1 ∉ SAT and ϕ_2 ∈ SAT.

The following fact is easy:

Claim 4. PSAT ∈ Promise-NP ∩ Promise-coNP.

Furthermore:

Claim 5. Promise-NP is Cook-reducible to PSAT.

Proof. The reduction from SAT to PSAT is the following. On input ϕ with n variables, and with oracle access to PSAT, the TM M proceeds as follows:

1. For every i ∈ [n]:
   (a) For b ∈ {0, 1}, let ϕ_{i,b} be the formula obtained from ϕ by setting x_i to b.
   (b) Query the PSAT oracle on (ϕ_{i,1}, ϕ_{i,0}). If the answer is Yes, set x_i = 1 in ϕ; otherwise set x_i = 0 in ϕ.
2. Accept iff the resulting variable-free formula evaluates to 1.

If ϕ is not satisfiable then of course we always reject, since every restriction of an unsatisfiable formula is unsatisfiable. Note that at any stage, if the current ϕ is satisfiable and the query (ϕ_{i,1}, ϕ_{i,0}) is answered with b, then ϕ with x_i = b is satisfiable. Thus, if ϕ is satisfiable we end up with a formula that evaluates to 1, and accept.

How did we gain this power? The promiscuous way in which we applied the oracle calls does not maintain the standard meaning of a reduction: we made queries that violate the promise. Still, we have shown that the reduction remains valid regardless of the answers given to these violating queries. However, such queries fail to preserve the structural consequences we would expect from a reduction. One may eliminate the problems arising from promise-violating queries by requiring that the reduction never makes them (we will see this in the exercise); Karp reductions, in particular, never make such queries.

2 Another look at BPP ⊆ Σ_2

2.1 BPP ⊆ Σ_2

We show a different way of proving Sipser's theorem BPP ⊆ Σ_2, via a randomness-efficient error reduction, due to Goldreich and Zuckerman [1]. Recall that since BPP ⊆ Σ_2, P = NP implies P = BPP.

The obvious way to amplify the success probability is by repeated independent sampling. This way, we can transform a constant-error BPP machine that uses m coins into one with error 2^{-t} using O(mt) coins. Using extractors, we know how to do it using O(t) coins.
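The naive amplification by repeated independent sampling can be sketched as follows; this is a toy illustration, not from the notes, and `amplify` and its repetition count are illustrative choices.

```python
import random

def amplify(machine, x, m, t):
    """Majority-vote error reduction by repeated independent sampling.

    `machine(x, coins)` is assumed to answer correctly with probability
    at least 2/3 over its m random coins. Running it k = O(t) times with
    fresh coins and taking the majority answer drives the error below
    2^-t (by a Chernoff bound), at a total cost of m*k = O(m*t) coins --
    exactly the coin budget that the extractor-based schemes mentioned
    above improve upon.
    """
    k = 18 * t + 1  # odd number of repetitions, large enough for Chernoff
    votes = sum(machine(x, [random.randint(0, 1) for _ in range(m)])
                for _ in range(k))
    return 1 if 2 * votes > k else 0
```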
What we should note is that the number of coins used is essentially the logarithm of the inverse of the error bound. Specifically:

Theorem 6 ([2]). For every L ∈ BPP there exist a probabilistic TM M and a polynomial p such that Pr_{y ∈ {0,1}^{p(|x|)}}[M(x, y) ≠ L(x)] < 2^{-2p(|x|)/3}.

The interested reader may refer to last year's course for the (non-tight) proof: http://www.cs.tau.ac.il/~amnon/classes/2016-prg/lecture2.pdf.

Theorem 7. BPP ⊆ Σ_2.

Proof. Let L ∈ BPP. By Theorem 6 there exist a polynomial-time TM M and a polynomial p such that, for y ∈ {0,1}^{p(|x|)}: if x ∈ L then Pr_y[M(x, y) = 1] > 1 - 2^{-2p(|x|)/3}, and if x ∉ L then Pr_y[M(x, y) = 1] < 2^{-2p(|x|)/3}.

On input x, let m = p(|x|) and write y = y'y'' with y', y'' ∈ {0,1}^{m/2}. If x ∈ L then there are at most 2^{m/3} strings y for which M(x, y) = 0. Thus, there are at most 2^{m/3} prefixes y' ∈ {0,1}^{m/2} for which some suffix y'' exists with M(x, y'y'') = 0; since 2^{m/3} < 2^{m/2}, some prefix y' has no such suffix. If x ∉ L then there are at most 2^{m/3} strings y for which M(x, y) = 1, so for each prefix y' ∈ {0,1}^{m/2} it holds that

Pr_{y'' ∈ {0,1}^{m/2}}[M(x, y'y'') = 1] ≤ 2^{m/3} / 2^{m/2} = o(1).

Consequently, if x ∈ L then ∃y' ∀y'' M(x, y'y'') = 1, and if x ∉ L then ∀y' ∃y'' M(x, y'y'') = 0. Thus, L ∈ Σ_2.

In fact, the proof shows an even stronger claim: we can replace the ∀ quantifier with a "for almost all" quantifier. What we did is essentially divide the set of 2^m possible coin tosses into 2^{m/2} subsets of size 2^{m/2}. To handle the x ∈ L case, we used the fact that the number of bad coin tosses is smaller than the number of subsets. To handle the x ∉ L case, we used the fact that the number of bad coin tosses is smaller than the size of each subset. Both facts are non-trivial, and are a consequence of the efficient error-reduction scheme. The above proof also holds for Promise-BPP.

2.2 P, Promise-RP and Promise-BPP

As far as we know, it may be the case that RP = P yet BPP ≠ P. In the promise world, this is not the case.

Theorem 8. If Promise-RP = P then Promise-BPP = P.

Here, the more standard BPP ⊆ Σ_2 proof will be useful. A closer look at that proof reveals that it also shows:

Theorem 9. Promise-BPP ⊆ Promise-RP^{Promise-RP}.

From this, Theorem 8 easily follows: if Promise-RP = P then Promise-BPP ⊆ P^P = P.

Proof (of Theorem 8). We prove the stronger claim that BPP ⊆ RP^{Promise-RP}, even with one oracle call. Let L ∈ BPP, equipped with a TM M that on inputs of length n errs with probability at most 2^{-n} and uses at most p(n) random coins for some fixed polynomial p. Define:

B = { (x, y_1, ..., y_{p(|x|)}) : ∃w, M(x, w ⊕ y_1) = 0 ∧ ... ∧ M(x, w ⊕ y_{p(|x|)}) = 0 }.

Consider an RP^B algorithm that, on input x of length n, chooses y_1, ..., y_{p(n)} independently at random and accepts iff (x, y_1, ..., y_{p(n)}) ∉ B.

If x ∈ L then for a fixed w and for every i, Pr[M(x, w ⊕ y_i) = 0] ≤ 2^{-n}. As the choices are independent, Pr[∀i, M(x, w ⊕ y_i) = 0] ≤ 2^{-n·p(n)}. By the union bound over all 2^{p(n)} strings w, we have that Pr[(x, y_1, ..., y_{p(n)}) ∈ B] ≤ 2^{p(n)} · 2^{-n·p(n)} ≤ 2^{-n}.

Suppose now that x ∉ L. Fix some y_1, ..., y_{p(n)} and an index i. If we choose w at random then, again, Pr[M(x, w ⊕ y_i) = 1] ≤ 2^{-n}. By the union bound, Pr[∃i, M(x, w ⊕ y_i) = 1] ≤ p(n) · 2^{-n} ≤ 1/2. Thus, for every y_1, ..., y_{p(n)} there exists a good w (in fact, at least half of the w-s are good), and so (x, y_1, ..., y_{p(n)}) ∈ B. Thus, indeed, L ∈ RP^B.

The next step is to show that L ∈ RP^{Promise-RP}. Let C be the promise problem defined on tuples (x, y_1, ..., y_{p(n)}) as follows:

- If more than half of the w-s satisfy M(x, w ⊕ y_1) = 0 ∧ ... ∧ M(x, w ⊕ y_{p(n)}) = 0, then the tuple is a Yes instance of C.
- If no w satisfies M(x, w ⊕ y_1) = 0 ∧ ... ∧ M(x, w ⊕ y_{p(n)}) = 0, then the tuple is a No instance of C.

The same argument that shows L ∈ RP^B also shows that L ∈ RP^C (verify this). What did we gain? B ∈ NP, whereas clearly C ∈ Promise-RP, and we are done.

3 EXP ⊄ P/poly revisited

We know that if EXP ⊄ P/poly then we have a partial derandomization of BPP, namely BPP ⊆ io-SUBEXP. The same derandomization also holds for Promise-BPP, that is, Promise-BPP ⊆ io-SUBEXP. However, it is not known whether the existence of hard Boolean functions in EXP is actually necessary for partial derandomization of BPP. On the other hand, any partial (even nondeterministic) derandomization of Promise-BPP yields a circuit lower bound for NEXP:

Theorem 10. If Promise-BPP ⊆ Promise-SUBEXP then NEXP ⊄ P/poly.
We will give this as an exercise. Thus, unconditional results in derandomization require either making a distinction between BPP and Promise-BPP, or proving circuit lower bounds for NEXP. Indeed, we know that NEXP ⊆ P/poly implies MA = NEXP, and hence no derandomization of MA is possible unless there are hard functions in NEXP. Since derandomizing Promise-BPP also allows us to derandomize MA, we conclude that no full derandomization is possible without assuming (or proving) lower bounds for NEXP.
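Looking back at the shifted-witness trick in the proof of Theorem 9 (Section 2.2), the oracle set B can be tested by brute force on tiny parameters. The sketch below is illustration only, not part of the notes; `in_B` and the toy machines are hypothetical, and the loop over all 2^m shifts is exponential, whereas the actual proof only needs one random w on the Promise-RP side.

```python
def in_B(machine, x, ys, m):
    """Brute-force membership test for the oracle set
        B = {(x, y_1..y_p) : exists w with M(x, w XOR y_i) = 0 for all i}
    from the proof of Theorem 9. Coin strings are represented as m-bit
    integers, so the shift w XOR y_i is plain integer XOR. We search
    for a single shift w that makes M reject on every shifted coin
    string w XOR y_i.
    """
    return any(all(machine(x, w ^ y) == 0 for y in ys)
               for w in range(2 ** m))
```

For example, a machine that accepts only the all-zero coin string lands in B (the shift w = 0 already avoids it for the nonzero y_i), while a machine that accepts every coin string cannot be fooled by any shift.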

References

[1] Oded Goldreich and David Zuckerman. Another proof that BPP ⊆ PH (and more). In Electronic Colloquium on Computational Complexity, 1997.

[2] David Zuckerman. Simulating BPP using a general weak random source. Algorithmica, 16(4-5):367-391, 1996.