Lecture 18: Oct 15, 2014

E0 224 Computational Complexity Theory, Fall 2014
Indian Institute of Science, Department of Computer Science and Automation
Lecturer: Chandan Saha <chandan@csa.iisc.ernet.in>    Scribe: Bibaswan Kumar Chatterjee

In this lecture we look at the classes BPP and co-BPP, the usefulness of randomness in computation, and one-sided-error randomized algorithms, which are captured by the classes RP and co-RP, both subsets of BPP.

18.1 Class BPP

Definition 18.1. For T : N → N and L ⊆ {0,1}*, we say that a PTM M decides L in time T(n) if for every x ∈ {0,1}*, M halts within T(|x|) steps regardless of its random choices, and Pr[M(x) = L(x)] ≥ 2/3. We define BPTIME(T(n)) as the class of languages decided by PTMs in O(T(n)) time, and BPP = ∪_c BPTIME(n^c).

Definition 18.2 (Alternative definition of BPP). A language L ⊆ {0,1}* is in BPP if there is a deterministic poly-time TM M' and a polynomial function q(·) such that

    Pr_{r ∈_R {0,1}^{q(|x|)}} [M'(x, r) = L(x)] ≥ 2/3.

It is easy to see that the two definitions are equivalent. Suppose L is in BPP by the first definition, via a PTM M. Construct a deterministic TM M' that, given x and a string r of length q(|x|), simulates M(x) using the bits of r as M's random choices and outputs M's answer. Then Pr_r[M'(x, r) = L(x)] = Pr[M(x) = L(x)] ≥ 2/3. Conversely, if L is in BPP by the second definition via M' and q, consider the PTM M that generates a uniformly random string r of length q(|x|) and runs M'(x, r). Clearly, Pr[M(x) = L(x)] = Pr_r[M'(x, r) = L(x)] ≥ 2/3.

Conjecture 18.3. BPP = P.

Claim 18.4. BPP ⊆ EXP.

This is clear from the alternative definition of BPP: with 2^poly(n) time we can simply enumerate all possible random strings of a poly-time PTM and take a majority vote. Researchers currently know that BPP is sandwiched between P and EXP, but are unable even to show that BPP is a proper subset of NEXP.

Question: How does NP relate to BPP?
The relation between BPP and NP is unknown: it is not known whether BPP ⊆ NP, NP ⊆ BPP, or neither. It is known, however, that BPP ⊆ P/poly [Adl78], which implies (by the Karp–Lipton theorem) that if NP ⊆ BPP, then PH collapses. Another important result is that BPP ⊆ Σ₂ᵖ ∩ Π₂ᵖ [Sip83, Laut83].

Definition 18.5 (co-BPP). A language L ⊆ {0,1}* is in co-BPP iff its complement L̄ is in BPP.
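As an aside, the brute-force simulation behind Claim 18.4 (BPP ⊆ EXP) can be sketched in Python. This is a toy illustration, not part of the lecture: the machine M and the bound q below are hypothetical stand-ins for a verifier in the sense of Definition 18.2.

```python
from itertools import product

def decide_by_enumeration(M, x, q):
    """Decide x deterministically by enumerating all 2^q(|x|) random
    strings of a verifier M(x, r) and taking a majority vote.
    This takes 2^poly(|x|) time, illustrating BPP is in EXP."""
    length = q(len(x))
    accepts = sum(M(x, r) for r in product((0, 1), repeat=length))
    # M answers correctly on at least a 2/3 fraction of the random
    # strings, so the majority vote always equals L(x).
    return 1 if 2 * accepts > 2 ** length else 0

# Hypothetical verifier for L = {x : |x| is even} that errs exactly
# when its first two random bits are both 1 (error probability 1/4).
def M(x, r):
    correct = 1 if len(x) % 2 == 0 else 0
    return correct if not (r[0] and r[1]) else 1 - correct

def q(n):
    return 3  # a stand-in bound on the number of random bits used

print(decide_by_enumeration(M, "ab", q))   # 1: "ab" is in L
print(decide_by_enumeration(M, "abc", q))  # 0: "abc" is not
```

The exponential cost is entirely in the 2^q(|x|)-way enumeration; each individual simulation of M is cheap.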

Lemma 18.6. BPP = co-BPP.

Proof. Let L ⊆ {0,1}* be a language with L ∈ co-BPP. Then, by the definition of co-BPP,

    L ∈ co-BPP
    ⟺ L̄ ∈ BPP
    ⟺ there is a poly-time PTM M s.t. Pr[M(x) = L̄(x)] ≥ 2/3
    ⟺ there is a poly-time PTM M' with M'(x) = 1 − M(x) and Pr[M'(x) = L(x)] ≥ 2/3
    ⟺ L ∈ BPP.

Thus the class BPP is closed under complement.

18.2 Usefulness of randomness in computation

In this section we explore how randomization can lead to simple algorithms with very efficient running times.

Definition 18.7 (Expected running time). Let M be a PTM that decides a language L ⊆ {0,1}*. For a string x ∈ {0,1}*, let T_x be the time taken by M to decide whether x ∈ L; T_x is a random variable. We say the expected running time of M is T(n) iff E[T_x] ≤ T(|x|) for every x ∈ {0,1}*.

18.2.1 Finding the k-th smallest element of an unsorted array

It is known that the k-th smallest element of an unsorted array can be found in O(n) worst-case time using the deterministic linear-time selection algorithm [CLRS01], but its analysis is quite involved and it is also somewhat difficult to implement in practice. Here we give a simple randomized algorithm with O(n) expected running time that is much simpler to implement as well as to analyze.

Find(A, k)
Input: A = {a_1, ..., a_n} ⊆ Z, k ∈ Z⁺ with k ≤ n
Output: the k-th smallest element of A
1: Pick i uniformly at random from [n]
2: Divide A \ {a_i} into two parts: A_1 = {a_j : a_j ≤ a_i} and A_2 = {a_j : a_j > a_i}. Let m = |A_1|.
3: If m = k − 1, output a_i
4: Else if m ≥ k, return Find(A_1, k)
5: Else return Find(A_2, k − m − 1)

(Step 5 recurses with rank k − m − 1, since the m elements of A_1 and the pivot a_i all precede the answer.)
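The pseudocode above translates directly into Python; the following is a minimal sketch (the list-based partition is our choice, and duplicates are tolerated):

```python
import random

def find(A, k):
    """Return the k-th smallest element of A (1-indexed), following
    Find(A, k): random pivot, expected O(n) time."""
    i = random.randrange(len(A))          # step 1: uniform pivot index
    pivot = A[i]
    rest = A[:i] + A[i + 1:]
    A1 = [a for a in rest if a <= pivot]  # step 2: elements <= pivot
    A2 = [a for a in rest if a > pivot]   #         elements >  pivot
    m = len(A1)
    if m == k - 1:                        # pivot is the k-th smallest
        return pivot
    elif m >= k:                          # answer lies inside A1
        return find(A1, k)
    else:                                 # skip A1 and the pivot
        return find(A2, k - m - 1)

print(find([7, 2, 9, 4, 1], 3))  # 4
```

The randomness affects only the running time, never the answer: every recursion path returns the correct element.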

Theorem 18.8. The expected running time of Find(A, k) is O(n), where n = |A|.

Proof. Let T(n) denote the running time of Find on an input of size n. Define the indicator variables

    I_j = 1 if j = m (with m as defined in the algorithm), and I_j = 0 otherwise.

Since the pivot index i is uniform over [n], for every j we have E[I_j] = Pr[m = j] = 1/n.

Steps 1–3 take at most cn time for some constant c. When m = j ≥ k the algorithm recurses on A_1, of size j, and when m = j ≤ k − 2 it recurses on A_2, of size n − j − 1 ≤ n − j. Hence

    T(n) ≤ cn + Σ_{j=k}^{n−1} I_j · T(j) + Σ_{j=0}^{k−2} I_j · T(n − j).

The indicator I_j depends only on the pivot choice, which is independent of the randomness used inside the recursive call, so taking expectations gives

    E[T(n)] ≤ cn + (1/n) Σ_{j=k}^{n−1} E[T(j)] + (1/n) Σ_{j=0}^{k−2} E[T(n − j)].

We now show by induction on n that E[T(n)] ≤ αcn for a suitable constant α > 1. The base case E[T(1)] ≤ c ≤ αc is trivial. Assuming the bound holds for all input sizes smaller than n,

    E[T(n)] ≤ cn + (αc/n) [ Σ_{j=k}^{n−1} j + Σ_{j=0}^{k−2} (n − j) ].

Writing u = k − 1, the first sum is at most n²/2 − u²/2, and the second equals un − u(u−1)/2 ≤ un − u²/2 + u; so the bracketed quantity is at most n²/2 + (un − u²) + u ≤ 3n²/4 + n, using un − u² ≤ n²/4. Therefore

    E[T(n)] ≤ cn + (3/4)αcn + αc.

Taking α = 8, the right-hand side is at most cn + 6cn + 8c ≤ 8cn = αcn whenever n ≥ 8; the finitely many smaller values of n are covered by enlarging α if necessary. Hence E[T(n)] ≤ αcn = O(n).

Remark: Whether or not random bits are absolutely indispensable depends on the context. For settings such as

interactive protocols and probabilistically checkable proofs, random bits are absolutely essential. Next we explore an example from coding theory where randomization is very useful.

18.2.2 Hadamard codes

The Hadamard code is an error-correcting code used for error detection and correction when transmitting messages over noisy or unreliable channels.

HadamardCode(x)
Input: x ∈ {0,1}^k
Output: y ∈ {0,1}^{2^k}
For each i ∈ {0, 1, ..., 2^k − 1}, viewing i as a bit-vector (i_1, ..., i_k) ∈ {0,1}^k:
    y_i = ⊕_{j=1}^{k} x_j · i_j    (the inner product of x and i over GF(2))

Now consider the problem of recovering each bit x_i of x from a possibly corrupted version of y = HadamardCode(x), as happens when y is transmitted over a noisy channel. Suppose at most a ρ fraction of the bits of y are corrupted. Our task is to recover each bit x_i while reading as few bits of y as possible.

Decode(y, i)
Input: y ∈ {0,1}^{2^k}, a copy of HadamardCode(x) with at most a ρ fraction of its bits corrupted, and i ∈ [k]
Output: a bit x'_i such that Pr[x'_i = x_i] is high
1: Pick a uniformly random subset s ⊆ [k]
2: Read y_s from y
3: Read y_{s △ {i}} from y (s △ {i} is the symmetric difference: flip the membership of i in s)
4: Output x'_i = y_s ⊕ y_{s △ {i}}

Here, for a subset t ⊆ [k], y_t denotes the codeword position indexed by the characteristic vector of t.

Analysis: If y_s and y_{s △ {i}} are both uncorrupted, then

    y_s = ⊕_{j ∈ s} x_j,    y_{s △ {i}} = ⊕_{j ∈ s △ {i}} x_j,    and hence    y_s ⊕ y_{s △ {i}} = x_i.
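Before turning to the error analysis, here is a small Python sketch of the encoder and the two-query local decoder. The bitmask representation of subsets is our choice: bit j of the index i stands for j ∈ s.

```python
import random

def hadamard_encode(x):
    """Encode x in {0,1}^k as y in {0,1}^(2^k), where y_i is the
    inner product (mod 2) of x with the bit-vector of i."""
    k = len(x)
    y = []
    for i in range(2 ** k):
        bit = 0
        for j in range(k):
            bit ^= x[j] & ((i >> j) & 1)
        y.append(bit)
    return y

def hadamard_decode(y, i, k):
    """Locally decode x_i (0-indexed) with only two reads of y:
    pick a random subset s, output y_s XOR y_{s symdiff {i}}."""
    s = random.randrange(2 ** k)   # a uniformly random subset of [k]
    t = s ^ (1 << i)               # flip membership of i (symmetric difference)
    return y[s] ^ y[t]

x = [1, 0, 1]
y = hadamard_encode(x)
print([hadamard_decode(y, i, 3) for i in range(3)])  # [1, 0, 1]
```

On a clean codeword the decoder never errs; if a ρ fraction of y is flipped, each call still returns x_i with probability at least 1 − 2ρ, since both queried positions are individually uniform.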

This is the ideal case. Since at most a ρ fraction of the bits of y are corrupted, and each of s and s △ {i} is uniformly distributed over the subsets of [k],

    Pr[y_s is corrupted] ≤ ρ,
    Pr[y_{s △ {i}} is corrupted] ≤ ρ,
    Pr[y_s or y_{s △ {i}} is corrupted] ≤ 2ρ    (by the union bound).

Thus Pr[x'_i = x_i] ≥ 1 − 2ρ, which is high provided ρ is small.

18.3 Classes RP and co-RP

The class BPP captures probabilistic algorithms with two-sided error: a PTM M that decides a language L ∈ BPP may, with some small probability, output 1 when the input string is not in L and output 0 when it is. However, there are also probabilistic algorithms with one-sided error.

Definition 18.9. For T : N → N and L ⊆ {0,1}*, we say that L ∈ RTIME(T(n)) if there is a PTM M such that for every x ∈ {0,1}*, M halts in T(|x|) time and

    x ∈ L ⟹ Pr[M(x) = 1] ≥ 2/3
    x ∉ L ⟹ Pr[M(x) = 0] = 1

Definition 18.10 (Class RP). RP = ∪_c RTIME(n^c).

Definition 18.11 (Class co-RP). A language L ∈ co-RP iff L̄ ∈ RP.

It is clear from the definitions that both RP and co-RP are subsets of BPP.

Recall the Polynomial Identity Testing problem from the last lecture: PIT = {C : the polynomial computed by the circuit C is identically 0}. The algorithm discussed there satisfies

    C ∈ PIT ⟹ Pr[M(C) = 1] = 1
    C ∉ PIT ⟹ Pr[M(C) = 0] ≥ 2/3

This shows PIT ∈ co-RP.

References

[Adl78] Leonard Adleman. Two theorems on random polynomial time. In Proceedings of the 19th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 1978.

[Sip83] Michael Sipser. A complexity theoretic approach to randomness. In Proceedings of the 15th ACM Symposium on Theory of Computing (STOC), pages 330–335, 1983.

[Laut83] Clemens Lautemann. BPP and the polynomial hierarchy. Information Processing Letters, 17(4):215–217, 1983.

[CLRS01] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 9: Medians and Order Statistics, pages 183–196.