Notes on Complexity Theory    Last updated: October, 2005
Handout 7    Jonathan Katz

1 More on Randomized Complexity Classes

Reminder: so far we have seen RP, coRP, and BPP. We introduce two more time-bounded randomized complexity classes: ZPP and PP.

1.1 ZPP

ZPP may be defined in various ways; we will pick one arbitrarily and state the others (and their equivalence) as claims.

Definition 1 Class ZPP consists of languages L for which there exists a ppt machine M which is allowed to output 1, 0, or "?" and such that, for all x:

    Pr[M(x) = ?] ≤ 1/2   and   Pr[M(x) = χ_L(x) | M(x) ≠ ?] = 1.

That is, M always outputs either the correct answer or a special symbol "?" denoting "don't know", and it outputs "don't know" with probability at most one half.

Claim 1 ZPP = RP ∩ coRP, and hence ZPP ⊆ BPP.

Claim 2 ZPP consists of languages L for which there exists a machine M running in expected polynomial time and for which:

    Pr[M(x) = χ_L(x)] = 1.

(A machine runs in expected polynomial time if there exists a polynomial p such that for all x, the expected running time of M(x) is at most p(|x|).)

Summarizing what we have so far: P ⊆ ZPP ⊆ RP ⊆ BPP. However (somewhat surprisingly), it is currently believed that P = BPP.
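To see one direction of Claim 2 concretely, here is a minimal Python sketch (the black-box machine M is a stand-in for a ZPP machine as in Definition 1, not anything defined in these notes). Since each run outputs "?" with probability at most 1/2, the expected number of independent runs until M commits to an answer is at most 2, so the resulting machine is always correct and runs in expected polynomial time:

    def zpp_to_las_vegas(M, x):
        """Turn a ZPP-style machine M (outputs 1, 0, or '?', with
        Pr['?'] <= 1/2 on fresh coins) into a zero-error decider
        running in expected polynomial time."""
        while True:
            ans = M(x)        # fresh random coins on every invocation
            if ans != '?':
                return ans    # correct with probability 1, by Definition 1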
1.2 PP

RP, coRP, BPP, and ZPP represent possibly-useful relaxations of deterministic polynomial-time computation. The next class does not, as we will see.

Definition 2 L ∈ PP if there exists a ppt machine M such that:

    Pr[M(x) = χ_L(x)] > 1/2.
Note that requiring only Pr[M(x) = χ_L(x)] ≥ 1/2 makes the definition trivial, as it can be achieved by flipping a random coin. Note also that error reduction does not apply to PP: to increase the gap between acceptance and rejection to an inverse polynomial, we might have to run our original algorithm an exponential number of times (and so we would no longer have a ppt algorithm).

Claim 3 BPP ⊆ PP ⊆ PSPACE.

The first inclusion follows immediately from the definitions. The second inclusion follows since, given a ppt Turing machine M, we may enumerate all coins (and take a majority vote) in PSPACE. The following shows that PP is too lax a definition:

Claim 4 NP ⊆ PP.

Proof Let L ∈ NP, and assume that inputs x ∈ L have witnesses of size exactly p(|x|) (by padding, this is w.l.o.g.). Consider the following PP algorithm for L on input x: with probability 1/2 − 2^{−p(|x|)}/4, accept. Otherwise, choose a string w ∈ {0,1}^{p(|x|)} at random and accept iff w is a witness for x. Clearly, if x ∉ L then the probability that this algorithm accepts is less than 1/2. On the other hand, if x ∈ L then there is at least one witness, and so the probability of acceptance is at least:

    1/2 − 2^{−p(|x|)}/4 + 2^{−p(|x|)}/2 > 1/2.
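The algorithm from Claim 4 is short enough to state in code. Below is a hedged Python sketch; is_witness (standing in for the NP verifier) and p_len = p(|x|) are assumed inputs, not anything defined in the notes, and a real implementation would need exact arithmetic, since 2^{-p_len} underflows floating point for large p_len:

    import random

    def pp_decide(x, is_witness, p_len):
        """One run of the PP algorithm for an NP language: accept outright
        with probability 1/2 - 2^{-p_len}/4; otherwise guess a uniform
        witness. Acceptance probability exceeds 1/2 iff x is in L."""
        if random.random() < 0.5 - 2.0 ** (-p_len) / 4:
            return True                                  # biased-coin accept
        w = ''.join(random.choice('01') for _ in range(p_len))
        return is_witness(x, w)                          # accept iff w works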
2 BPP in Relation to Deterministic Complexity Classes

BPP is currently considered to be the "right" notion of what is efficiently computable. It is therefore of interest to see exactly how powerful this class is.

2.1 BPP ⊆ P/poly

We claimed in an earlier lecture that P/poly provides an upper bound on efficient computation. We show that this is true with regard to BPP.

Theorem 5 BPP ⊆ P/poly.

Proof Let L ∈ BPP. We know (applying the error reduction seen earlier) that there exists a probabilistic polynomial-time Turing machine M and a polynomial p such that M uses a random tape of length p(|x|) and

    Pr_r[M(x; r) ≠ χ_L(x)] < 2^{−2|x|^2}.

An equivalent way of looking at this is that for any n and each x ∈ {0,1}^n there is a set of "bad" coins for x (for which M(x) returns the wrong answer), but the size of this bad set is smaller than 2^{p(n)} · 2^{−2n^2}. Taking the union of these bad sets over all x ∈ {0,1}^n, we find that the total number of random coins which are bad for some x ∈ {0,1}^n is at most 2^{p(n)} · 2^{n−2n^2} < 2^{p(n)}. In particular, for each n there exists at least one random tape r_n ∈ {0,1}^{p(n)} which is good for every x ∈ {0,1}^n (in fact, there are many such random tapes). If we let the sequence of advice strings be exactly these {r_n}_{n∈N}, we obtain the result of the theorem.
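The counting argument is easy to check numerically on toy parameters. The following Python sketch uses a contrived machine, invented purely for illustration, that errs on exactly one random tape per input; it verifies that most tapes are simultaneously good for every input of length n:

    import itertools

    # Toy instance of the union-bound step in Theorem 5: n = 3, p(n) = 8,
    # language = parity of x, machine errs on one tape per input.
    n, p = 3, 8
    chi = lambda x: x.count('1') % 2
    def M(x, r):
        bad = (x * 3)[:p]                 # the single bad tape for input x
        return 1 - chi(x) if r == bad else chi(x)

    inputs = [''.join(t) for t in itertools.product('01', repeat=n)]
    tapes = [''.join(t) for t in itertools.product('01', repeat=p)]
    good = [r for r in tapes if all(M(x, r) == chi(x) for x in inputs)]
    print(len(good), "of", len(tapes), "tapes are good for every input")

Here at most 2^n = 8 of the 2^p = 256 tapes can be bad for some input, so at least 248 tapes are good for all inputs at once; any one of them can serve as the advice string.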
2.2 BPP is in the Polynomial Hierarchy

We noted earlier that it is not known whether BPP ⊆ NP. However, we can place BPP in the polynomial hierarchy. Here, we give two different proofs that BPP ⊆ Σ_2 ∩ Π_2.

2.2.1 Lautemann's Proof

We first prove some easy propositions. For S ⊆ {0,1}^m, say S is large if |S| ≥ (1 − 1/m) · 2^m. Say S is small if |S| < 2^m/m. Finally, for a string z ∈ {0,1}^m define

    S ⊕ z := {s ⊕ z | s ∈ S}.

We first prove:

Proposition 6 Let S ⊆ {0,1}^m be small. Then for all z_1, ..., z_m ∈ {0,1}^m we have ∪_i (S ⊕ z_i) ≠ {0,1}^m.

This follows easily by counting. On the one hand, |{0,1}^m| = 2^m. On the other hand, for any z_1, ..., z_m we have

    |∪_i (S ⊕ z_i)| ≤ Σ_i |S ⊕ z_i| = m · |S| < 2^m.

Furthermore:

Proposition 7 If S is large, then:

    Pr_{z_1,...,z_m ∈ {0,1}^m} [∪_i (S ⊕ z_i) = {0,1}^m] > 1 − (2/m)^m.

To see this, consider first the probability that some fixed y is not in ∪_i (S ⊕ z_i). This is given by:

    Pr_{z_1,...,z_m ∈ {0,1}^m} [y ∉ ∪_i (S ⊕ z_i)] = Π_i Pr_{z_i ∈ {0,1}^m} [y ∉ S ⊕ z_i] ≤ (1/m)^m

(the z_i are independent, and y ∉ S ⊕ z_i iff the uniform string y ⊕ z_i lands outside the large set S, which happens with probability at most 1/m). Applying a union bound over all y ∈ {0,1}^m, we see that the probability that there exists a y ∈ {0,1}^m which is not in ∪_i (S ⊕ z_i) is at most 2^m · (1/m)^m = (2/m)^m. The proposition follows.

Given L ∈ BPP, there exist a polynomial m and an algorithm M such that M uses m(|x|) random coins and errs with probability less than 1/m(|x|). For any x ∈ {0,1}^n, let S_x ⊆ {0,1}^{m(|x|)} denote the set of random coins for which M(x; r) outputs 1. Thus, if x ∈ L (and letting m = m(|x|)) we have |S_x| ≥ (1 − 1/m) · 2^m, while if x ∉ L then |S_x| < 2^m/m. This leads to the following Σ_2 characterization of L:

    x ∈ L ⟺ ∃ z_1, ..., z_m ∈ {0,1}^m ∀ y ∈ {0,1}^m : y ∈ ∪_i (S_x ⊕ z_i).
Note that the desired condition can be efficiently verified by checking whether M(x; y ⊕ z_i) = 1 for some i. We conclude that BPP ⊆ Σ_2. Using the fact that BPP is closed under complement, we get BPP ⊆ Π_2 as well, giving the claimed result.
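Lautemann's covering step can also be checked empirically. The following Python sketch (toy parameters, chosen purely for illustration) samples random shifts of a large subset of {0,1}^m and confirms that they almost always cover the whole cube, while Proposition 6 guarantees that no m shifts of a small set ever can:

    import itertools, random

    m = 8
    universe = [''.join(t) for t in itertools.product('01', repeat=m)]
    xor = lambda a, z: ''.join(str(int(u) ^ int(v)) for u, v in zip(a, z))

    # A "large" set: |S| >= (1 - 1/m) * 2^m elements of {0,1}^m.
    S = set(random.sample(universe, int((1 - 1/m) * 2**m)))

    covered = 0
    for _ in range(100):
        zs = [random.choice(universe) for _ in range(m)]
        union = {xor(s, z) for z in zs for s in S}
        covered += (union == set(universe))
    print("covered in", covered, "of 100 trials")

By Proposition 7 the failure probability per trial is below (2/m)^m = (1/4)^8, so all 100 trials should succeed with overwhelming probability.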
2.2.2 The Sipser-Gács Proof

The proof in the previous section is a bit easier than the one we present here, but the technique here is quite useful. First, some notation. Let h : {0,1}^m → R be a function, let S ⊆ {0,1}^m, and let s ∈ S. We say that h isolates s if h(s') ≠ h(s) for all s' ∈ S \ {s}. We say a collection of functions H := {h_1, ..., h_ℓ} isolates s if there exists an h_i ∈ H which isolates s. Finally, we say H isolates S if H isolates every element of S.

Proposition 8 Let H be a family of pairwise-independent hash functions mapping {0,1}^m to some set R. Let S ⊆ {0,1}^m. Then:

    If |S| > m · |R| then:  Pr_{h_1,...,h_m ∈ H} [{h_1, ..., h_m} isolates S] = 0.

    If |S|^{m+1} ≤ |R|^m then:  Pr_{h_1,...,h_m ∈ H} [{h_1, ..., h_m} isolates S] > 0.

Proof For the first part of the proposition, note that each hash function h_i can isolate at most |R| elements. So m hash functions can isolate at most m · |R| elements. Since |S| > m · |R|, the claim follows.

For the second part, assume S is non-empty (if S is empty then the theorem is trivially true). Consider the probability that a particular s ∈ S is not isolated:

    Pr_{h_1,...,h_m ∈ H} [{h_1, ..., h_m} does not isolate s] = Π_i Pr_{h_i ∈ H} [h_i does not isolate s]
        ≤ (Σ_{s' ∈ S\{s}} Pr_h[h(s') = h(s)])^m < (|S|/|R|)^m

(the equality uses the independent choice of the h_i; the last step uses pairwise independence, which gives Pr_h[h(s') = h(s)] = 1/|R|). Summing over all s ∈ S, we see that the probability that S is not isolated is less than |S| · (|S|/|R|)^m = |S|^{m+1}/|R|^m ≤ 1. This gives the stated result.
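The isolation technique is easy to experiment with. A hedged Python sketch follows; the family h_{a,b}(x) = ((a·x + b) mod p) mod |R| and all parameters are illustrative choices, not from the notes, and reducing mod |R| makes the family only approximately pairwise independent, which is fine for a demo:

    import random

    U, k, r, p = 256, 4, 32, 257   # universe {0..255}, k hashes, |R| = 32, prime p

    def sample_h():
        a, b = random.randrange(1, p), random.randrange(p)
        return lambda x: ((a * x + b) % p) % r

    def isolates(hs, S):
        # H isolates S iff every s in S is collision-free under some h in hs.
        return all(any(all(h(s) != h(t) for t in S if t != s) for h in hs)
                   for s in S)

    S = random.sample(range(U), 8)   # small: |S|^{k+1} = 8^5 <= r^k = 32^4
    ok = sum(isolates([sample_h() for _ in range(k)], S) for _ in range(200))
    print("isolated in", ok, "of 200 trials")   # Proposition 8 promises > 0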
Let L ∈ BPP. Then there exists a machine M using m(|x|) random coins which errs with probability less than 1/4m(|x|). Let x ∈ {0,1}^n, set m = m(|x|), and define S_x as in the previous section. Set |R| = 2^m/2m. Note that if x ∈ L then |S_x| ≥ 3 · 2^m/4 > m · |R|. On the other hand, if x ∉ L then |S_x| < 2^m/4m, and so |S_x|^{m+1}/|R|^m < 1/4. The above proposition thus leads to the following Π_2 characterization of L:

    x ∈ L ⟺ ∀ h_1, ..., h_m ∈ H ∃ s, s_1, ..., s_m : ∧_i ((s ≠ s_i) ∧ (h_i(s) = h_i(s_i)) ∧ (s, s_i ∈ S_x))

(note that membership in S_x can be verified in polynomial time, as in the previous section). Indeed, if x ∈ L then no collection {h_1, ..., h_m} isolates S_x, so some s ∈ S_x collides with some s_i ∈ S_x under every h_i; if x ∉ L then some collection isolates S_x, falsifying the inner formula. Closure of BPP under complement gives the stated result.
3 Randomized Space Classes

An important note regarding randomized space classes is that we do not allow the machine to store its previous random coin flips "for free" (it can, if it chooses, write its random choice(s) on its work tape). If we consider the model in which a randomized Turing machine is simply a Turing machine with access to a random tape, this implies that we allow only unidirectional access to the random tape.

There is an additional subtlety regarding the definition of randomized space classes: we need to also bound the running time. In particular, we will define rspace(s(n)) as follows:

Definition 3 A language L is in rspace(s(n)) if there exists a randomized Turing machine M using s(n) space and 2^{O(s(n))} time and such that:

    x ∈ L ⟹ Pr[M(x) = 1] ≥ 1/2   and   x ∉ L ⟹ Pr[M(x) = 0] = 1.

Without the time restriction, we get a class which is too powerful:

Proposition 9 Define rspace′ as above, but without the time restriction. Then for any space-constructible s(n) ≥ log n we have rspace′(s(n)) = nspace(s(n)).

Proof (Sketch) Showing that rspace′(s(n)) ⊆ nspace(s(n)) is easy. We turn to the other direction. The basic idea is that, given a language L ∈ nspace(s(n)), we construct a machine which on input x guesses valid witnesses for x (where a witness here is an accepting computation of the non-deterministic machine on input x). Since there may be only a single witness, we guess a doubly-exponential number of times. This is where the absence of a time bound makes a difference.

In more detail, given L as above we know that any x ∈ L has a witness (i.e., an accepting computation) of length at most ℓ(n) = 2^{O(s(n))}. Assuming such a witness exists, we can guess it with probability at least 2^{−ℓ(n)} (and verify whether we have a witness or not using space O(s(n))). Equivalently, the expected number of tries until we guess the witness is 2^{ℓ(n)}. The intuition is that if we guess 2^{ℓ(n)} witnesses, we have a good chance of guessing a correct one. Looking at it that way, the problem boils down to implementing a counter that can count up to 2^{ℓ(n)}. The naive idea of using a standard ℓ(n)-bit counter will not work, since ℓ(n) is exponential in s(n)! Instead, we use a randomized counter: each time after guessing a witness, flip ℓ(n) coins and stop if they are all 0; otherwise, continue. This can be done using a counter of size log ℓ(n) = O(s(n)).

Note that the Turing machine thus defined may run for infinite time; however, the probability that it does so is 0. In any case, it never uses more than s(n) space, as required. Furthermore (using the fact that infinite runs occur with probability 0), the machine satisfies:

    x ∈ L ⟹ Pr[M(x) = 1] ≥ 1/2   and   x ∉ L ⟹ Pr[M(x) = 0] = 1.
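The randomized counter is the crux of the sketch, so here is a small Python illustration (ℓ is a toy parameter here; in the proof ℓ(n) = 2^{O(s(n))}, and only an O(log ℓ)-bit flip counter is ever stored). Each round stands in for one guess-and-verify attempt, and the round stops when ℓ fresh coin flips all come up 0, so the expected number of rounds is 2^ℓ:

    import random

    def randomized_counter_rounds(ell):
        """Run rounds until ell fresh coin flips are all 0. The round count
        is returned only to exhibit the ~2^ell expectation; the real
        construction never stores it, only an O(log ell)-bit flip counter."""
        rounds = 0
        while True:
            rounds += 1
            # a witness guess-and-verify step would happen here
            if all(random.randrange(2) == 0 for _ in range(ell)):
                return rounds

    trials = [randomized_counter_rounds(8) for _ in range(50)]
    print("mean rounds:", sum(trials) / 50, "(expected about 2^8 = 256)")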
3.1 RL

Define RL = rspace(log n). We show that undirected graph connectivity is in RL (here, we are given an undirected graph and vertices s, t and asked to decide whether there is a path from s to t). This follows easily from the following result:

Theorem 10 Let G be an n-vertex undirected graph, and s an arbitrary vertex in G. A random walk of length 4n^3 beginning at s visits all vertices in the connected component of s with probability at least 1/2.

Proof In the next lecture we will discuss Markov chains and random walks on undirected graphs, and will show that if G is non-bipartite then for any edge (u, v) in the connected component containing s, the expected time to move from vertex u to vertex v is at most 2|E| + n ≤ n^2. (Note that if G is bipartite, we can make it non-bipartite by mentally adding n self-loops; the expected time to move from u to v is then at most 2|E| + n as claimed. But taking a random walk in the actual graph (without self-loops) can only result in a lower expected time to move from u to v.)

Consider any spanning tree of the connected component containing s; this will contain n − 1 edges. Considering any traversal of this spanning tree (which traverses fewer than 2n edges), we see that the expected time to reach every vertex in the connected component is at most 2n · n^2 = 2n^3. Taking a random walk for twice this many steps means we will reach the entire component with probability at least one half.

We remark that the analogous result does not hold for directed graphs. (If it did, we would have RL = NL, which is considered unlikely. To be clear: it is possible that some other algorithm can solve directed connectivity in RL, but the above algorithm does not.)

Next time, we will see another application of random walks to solving 2-SAT.
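A direct simulation of Theorem 10 is straightforward. The following Python sketch (the graph and sizes are illustrative; a cycle is used as a slow-mixing test case) runs a 4n^3-step walk from s and checks how often it covers the connected component of s:

    import random

    def walk_covers(adj, s, steps):
        """Random walk from s for `steps` steps; report whether every vertex
        of the (connected) graph was visited."""
        seen, v = {s}, s
        for _ in range(steps):
            v = random.choice(adj[v])
            seen.add(v)
        return len(seen) == len(adj)

    n = 16
    adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # an n-cycle
    hits = sum(walk_covers(adj, 0, 4 * n**3) for _ in range(100))
    print("covered in", hits, "of 100 walks")   # Theorem 10 predicts >= 50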
Bibliographic Notes

Sections 1 and 3 are adapted from [2, Lecture 7], and Section 2.2 from [1, Lectures 19-20].

References

[1] J.-Y. Cai. Scribe notes for CS 810: Introduction to Complexity Theory. 2003.

[2] O. Goldreich. Introduction to Complexity Theory (July 31, 1999).