Topics in Probabilistic Combinatorics and Algorithms, Winter 2016: Basic Derandomization Techniques


3. Basic Derandomization Techniques

Definition. $\mathrm{DTIME}(t(n)) = \{L : L$ can be decided deterministically in time $O(t(n))\}$.

$\mathrm{EXP} = \bigcup_c \mathrm{DTIME}(2^{n^c})$: the languages decidable deterministically in time $2^{O(n^c)}$ for some constant $c$.

$\mathrm{BPP}$: the languages $L$ for which there is a probabilistic polynomial-time algorithm $A$ such that

$x \in L \implies \Pr[A(x) \text{ accepts}] \ge 2/3$,
$x \notin L \implies \Pr[A(x) \text{ accepts}] \le 1/3$.

Derandomization Techniques

(I) Exponential / Logarithmic Enumeration

Proposition. $\mathrm{BPP} \subseteq \mathrm{EXP}$.

Proof. Let $L \in \mathrm{BPP}$, decided by an algorithm $A$ running in time $t(n) = n^{c_1}$. For $x \in \{0,1\}^n$ and coin tosses $r \in \{0,1\}^m$, where $m \le t(n)$,

$\Pr_r[A(x, r) \text{ accepts}] = \frac{1}{2^m} \sum_{r \in \{0,1\}^m} A(x, r),$

and we can compute this sum deterministically by enumerating all $2^m$ coin-toss sequences, each run taking time $t(n)$. Then $L \in \mathrm{DTIME}(2^{t(n)}\, t(n))$, so $L \in \mathrm{EXP}$.

Remark. If the number of random bits is $m = O(\log n)$, then $L \in \mathrm{DTIME}(2^m\, t(n)) = \mathrm{DTIME}(t(n)\, n^{O(1)}) \subseteq \mathrm{P}$. (A code sketch of the enumeration appears after the next lemma.)

(II) Nondeterministic Method

Lemma 1. Let $A(x, r)$ be a randomized algorithm for a language $L$ with error probability smaller than $2^{-n}$ on every input $x$ of length $n$. Then for every $n$ there is a fixed sequence of coin tosses $r_n$ such that $A(x, r_n)$ is correct for all $x \in \{0,1\}^n$.

Proof. Let $R_n$ be uniformly random in $\{0,1\}^m$. By a union bound,

$\Pr_{R_n}[\exists x \in \{0,1\}^n \text{ such that } A(x, R_n) \text{ is incorrect}] \le \sum_x \Pr_{R_n}[A(x, R_n) \text{ is incorrect}] < 2^n \cdot 2^{-n} = 1.$

Thus there exists a value $r_n$ of $R_n$ such that $A(x, r_n)$ is correct for all $x \in \{0,1\}^n$.
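As a concrete illustration of technique (I), here is a minimal Python sketch. The procedure $A(x, r)$ and the toy parameters are illustrative assumptions, not part of the notes; the point is only that enumerating all $2^m$ coin-toss sequences computes the acceptance probability exactly.

```python
from itertools import product

# Hypothetical randomized decision procedure: A(x, r) in {0, 1}, using
# m = 2 coin tosses. It accepts nonzero x on 3/4 of the coin sequences
# and accepts x = 0 on only 1/4 of them, so it has BPP-style error <= 1/3.
def A(x, r):
    if x != 0:
        return 1 if r != (0, 0) else 0
    return 1 if r == (0, 0) else 0

def exact_acceptance_probability(A, x, m):
    """Technique (I): compute Pr_r[A(x, r) accepts] exactly by
    enumerating all 2^m coin-toss sequences r."""
    return sum(A(x, r) for r in product((0, 1), repeat=m)) / 2 ** m

def derandomized_decide(A, x, m):
    """The probability is >= 2/3 for x in L and <= 1/3 for x not in L,
    so comparing against 1/2 decides L in deterministic time 2^m * t(n)."""
    return exact_acceptance_probability(A, x, m) > 1 / 2

print(derandomized_decide(A, 5, 2), derandomized_decide(A, 0, 2))  # True False
```

When $m = O(\log n)$ the loop has polynomially many iterations, which is exactly the Remark above.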

Definition. P: the class of problems solvable in polynomial time. NP: the class of problems whose solutions can be verified in polynomial time. Example (Hamiltonian cycle): given a graph $G$ on $n$ vertices, decide whether it contains a simple cycle of length $n$.

Theorem. $\mathrm{RP} \subseteq \mathrm{NP}$.

Proof. Let $L \in \mathrm{RP}$, with a randomized polynomial-time algorithm $A$ such that

$x \in L \implies \Pr[A(x) \text{ accepts}] \ge 1/2$,
$x \notin L \implies \Pr[A(x) \text{ accepts}] = 0$.

(Repetition makes the error probability at most $2^{-n}$.) If $x \in L$, there is a fixed sequence of coin tosses $r$ such that $A(x, r)$ accepts, and such an $r$ is a polynomial-length certificate for $x$; if $x \notin L$, no sequence of coin tosses makes $A$ accept. Hence $L \in \mathrm{NP}$.

(III) The Method of Conditional Expectations

MAXCUT Problem: Given $G = (V, E)$, find a partition $V = S \cup T$, $S \cap T = \emptyset$, maximizing

$\mathrm{cut}(S, T) = |\{\{u, v\} \in E : u \in S,\ v \in T\}|.$

Algorithm (randomized MAXCUT approximation).
Input: $G = (V, E)$, with no loops.
Flip $n$ coins $r_1, \dots, r_n$; put vertex $i$ in $S$ if $r_i = 0$ and in $T$ if $r_i = 1$.
Output: $(S, T)$.

Remark. $\mathrm{E}[\mathrm{cut}(S, T)] = |E|/2$, since each edge is cut with probability $1/2$.

To derandomize, consider the conditional expectation after choosing $r_1, \dots, r_i$:

$e(r_1, r_2, \dots, r_i) = \mathrm{E}[\mathrm{cut}(S, T) \mid R_1 = r_1, \dots, R_i = r_i],$

so that $e(\emptyset) = |E|/2$. Writing $S_i, T_i$ for the vertices placed so far and $U = \{i+1, \dots, n\}$ for the undecided ones,

$e(r_1, r_2, \dots, r_i) = \mathrm{cut}(S_i, T_i) + \frac{1}{2}\,|\{e \in E : e \cap U \neq \emptyset\}|,$

since each edge with an undecided endpoint is cut with probability exactly $1/2$. Choosing $r_{i+1}$ to maximize the conditional expectation gives

$e(r_1, \dots, r_{i+1}) \ge e(r_1, \dots, r_i) \ge \dots \ge e(\emptyset) = |E|/2.$

Algorithm (deterministic MAXCUT approximation).
Input: $G = ([n], E)$.
(1) Set $S = \emptyset$, $T = \emptyset$.
(2) For $i = 1, 2, \dots, n$:
    (a) If $\mathrm{cut}(\{i\}, T) > \mathrm{cut}(\{i\}, S)$, set $S \leftarrow S \cup \{i\}$.
    (b) Else, set $T \leftarrow T \cup \{i\}$.
Output: $(S, T)$; by the argument above, $\mathrm{cut}(S, T) \ge |E|/2$.
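The deterministic algorithm translates directly into code. Here is a minimal sketch, assuming the graph is given as a list of edge pairs (a representation not fixed by the notes):

```python
def greedy_maxcut(n, edges):
    """Deterministic MAXCUT via the method of conditional expectations.
    Vertices are 1..n; `edges` is a list of pairs (u, v).
    Vertex i joins the side that cuts more edges to already-placed
    vertices, so the conditional expectation e(r_1, ..., r_i) never
    decreases and the final cut has size at least |E|/2."""
    S, T = set(), set()
    for i in range(1, n + 1):
        to_T = sum(1 for u, v in edges if (u == i and v in T) or (v == i and u in T))
        to_S = sum(1 for u, v in edges if (u == i and v in S) or (v == i and u in S))
        (S if to_T > to_S else T).add(i)  # cut the larger batch of edges
    cut_size = sum(1 for u, v in edges if (u in S) != (v in S))
    return S, T, cut_size

# Example: on a 4-cycle (|E| = 4) the greedy cut attains all 4 edges,
# comfortably above the |E|/2 = 2 guarantee.
print(greedy_maxcut(4, [(1, 2), (2, 3), (3, 4), (4, 1)]))
```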

Pairwise Independence

Hashing Problem: store a set $S \subseteq [N]$ so that queries "Is $x \in S$?" can be answered quickly.

Simple solution: use a table $T$ of size $N$ with $T[x] = 1$ if $x \in S$ and $T[x] = 0$ otherwise.

Hashing solution: pick a hash function $h : [N] \to [M]$ and a table $T$ of size $M$, and set $T[h(x)] = x$ for each $x \in S$. (To test $y \in S$, compute $h(y)$ and check whether $T[h(y)] = y$.) This is well defined provided $h|_S$ is one-to-one.

Choose $H : [N] \to [M]$ uniformly at random. For $k \in \mathbb{N}$ and any $S$ with $|S| \le k$,

$\Pr[\exists x \neq y \in S \text{ with } H(x) = H(y)] \le \sum_{x \neq y,\ x, y \in S} \Pr[H(x) = H(y)] \le \binom{k}{2} \frac{1}{M} < \varepsilon \quad \text{if } M \ge k^2/(2\varepsilon).$

Remark. Random variables $X$ and $Y$ are (pairwise) independent iff for all values $a$ and $b$, $\Pr[X = a, Y = b] = \Pr[X = a] \Pr[Y = b]$.

Definition. A family $\mathcal{H} = \{h : [N] \to [M]\}$ is pairwise independent if for all $x_1 \neq x_2 \in [N]$ and all $y_1, y_2 \in [M]$, when $H$ is chosen uniformly from $\mathcal{H}$,

$\Pr[H(x_1) = y_1 \wedge H(x_2) = y_2] = \frac{1}{M^2}. \qquad (1)$

Equivalently, (1) is the same as the following two conditions:
(i) for every $x \in [N]$, the random variable $H(x)$ is uniformly distributed in $[M]$;
(ii) for all $x_1 \neq x_2 \in [N]$, the random variables $H(x_1)$ and $H(x_2)$ are independent.

Example 3. Let $\mathbb{F}$ be a finite field and $\mathcal{H} = \{h_{a,b} : \mathbb{F} \to \mathbb{F} \mid a, b \in \mathbb{F}\}$, where $h_{a,b}(x) = ax + b$.
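Here is a minimal sketch of Example 3 over the prime field $\mathbb{F}_p$; the choice $p = 7$, the test points, and the brute-force verification of (1) are illustrative assumptions:

```python
import random
from itertools import product

P = 7  # field size; any prime p gives the field F_p

def h(a, b, x):
    """The pairwise independent family h_{a,b}(x) = a*x + b over F_p."""
    return (a * x + b) % P

# Brute-force check of pairwise independence, equation (1): for fixed
# x1 != x2 and every (y1, y2), exactly one of the P^2 pairs (a, b)
# satisfies h_{a,b}(x1) = y1 and h_{a,b}(x2) = y2, so the probability
# over a random (a, b) is 1/P^2.
x1, x2 = 2, 5
for y1, y2 in product(range(P), repeat=2):
    count = sum(1 for a, b in product(range(P), repeat=2)
                if h(a, b, x1) == y1 and h(a, b, x2) == y2)
    assert count == 1

# Sampling one hash function costs only two random field elements.
a, b = random.randrange(P), random.randrange(P)
print([h(a, b, x) for x in range(P)])
```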

Proposition. The family of functions $\mathcal{H}$ in Example 3 is pairwise independent.

Proof. Let $x_1 \neq x_2$ and $y_1, y_2 \in \mathbb{F}$. The system

$a x_1 + b = y_1,$
$a x_2 + b = y_2$

determines $(a, b)$ uniquely, since $x_1 \neq x_2$. Hence exactly one of the $M^2$ functions in $\mathcal{H}$ maps $x_1 \mapsto y_1$ and $x_2 \mapsto y_2$, so

$\Pr_{H \leftarrow \mathcal{H}}[H(x_1) = y_1 \wedge H(x_2) = y_2] = \frac{1}{M^2}.$

Randomness-Efficient Error Reduction and Sampling

Lemma 4 (Chebyshev's Inequality). If $X$ is a random variable with expectation $\mu$, then

$\Pr[|X - \mu| \ge \varepsilon] \le \frac{\mathrm{Var}[X]}{\varepsilon^2}.$

Proposition (Pairwise Independent Tail Inequality). Let $X_1, \dots, X_t$ be pairwise independent random variables taking values in the interval $[0, 1]$, and let $X = (\sum_i X_i)/t$. Then

$\Pr[|X - \mathrm{E}(X)| \ge \varepsilon] \le \frac{1}{t \varepsilon^2}.$

Proof.

$\mathrm{Var}(X) = \frac{1}{t^2}\, \mathrm{E}\Big[\Big(\sum_{i=1}^{t} (X_i - \mathrm{E}(X_i))\Big)^2\Big] = \frac{1}{t^2} \sum_{i,j} \mathrm{E}\big[(X_i - \mathrm{E}(X_i))(X_j - \mathrm{E}(X_j))\big].$

For $i \neq j$, pairwise independence lets the cross terms factor as $\mathrm{E}[X_i - \mathrm{E}(X_i)] \cdot \mathrm{E}[X_j - \mathrm{E}(X_j)] = 0$, so

$\mathrm{Var}(X) = \frac{1}{t^2} \sum_i \mathrm{E}\big[(X_i - \mathrm{E}(X_i))^2\big] = \frac{1}{t^2} \sum_i \mathrm{Var}(X_i) \le \frac{1}{t},$

since each $\mathrm{Var}(X_i) \le 1$. Chebyshev's inequality now gives $\Pr[|X - \mathrm{E}(X)| \ge \varepsilon] \le \mathrm{Var}(X)/\varepsilon^2 \le 1/(t\varepsilon^2)$.
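As an empirical sanity check on the tail inequality (and a preview of the randomness-efficient sampling below), here is a sketch that draws $t = p$ pairwise independent samples from the Example 3 family; the prime $p = 1009$ and the other parameters are illustrative assumptions:

```python
import random

P = 1009  # prime; h_{a,b}(x) = a*x + b over F_p gives t = P pairwise
          # independent uniform samples from just two random field elements

def pairwise_independent_samples():
    """t = P pairwise independent uniform values in F_p, from O(log P)
    random bits, instead of t independent draws costing t*log P bits."""
    a, b = random.randrange(P), random.randrange(P)
    return [(a * x + b) % P for x in range(P)]

# Tail inequality check with X_i = sample_i/(P-1) in [0, 1] and
# E(X) = 1/2: deviations |X - 1/2| >= eps should occur with frequency
# at most 1/(t*eps^2).
eps, trials, t = 0.1, 1000, P
bad = 0
for _ in range(trials):
    x_bar = sum(s / (P - 1) for s in pairwise_independent_samples()) / t
    if abs(x_bar - 0.5) >= eps:
        bad += 1
print(f"empirical {bad / trials:.4f} <= bound {1 / (t * eps * eps):.4f}")
```

The rare bad trials come essentially from the draw $a = 0$, where all $t$ samples collapse to the single value $b$; pairwise independence still holds over the random choice of $(a, b)$.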

Example 5 (Error Reduction). The Proposition tells us that with $t = O(2^k)$ pairwise independent repetitions we can reduce the error probability of a BPP algorithm from $1/3$ to $2^{-k}$. If the original BPP algorithm uses $m$ random bits, we can do this by choosing $h : \{0,1\}^{k+O(1)} \to \{0,1\}^m$ at random from a pairwise independent family, running the algorithm with coin tosses $h(x)$ for every $x \in \{0,1\}^{k+O(1)}$, and taking the majority answer. This requires $m + \max\{m, k + O(1)\} = O(m + k)$ random bits.

                                     Number of Repetitions    Number of Random Bits
  Independent Repetitions            O(k)                     O(km)
  Pairwise Independent Repetitions   O(2^k)                   O(m + k)

Example 6 (Sampling). Given oracle access to a function $f : \{0,1\}^m \to [0,1]$, we want to approximate $\mathrm{E}(f)$ to within an additive error of $\varepsilon$. With truly random samples, the Chernoff bound for $X = (\sum_{i=1}^t X_i)/t$ gives

$\Pr[|X - \mathrm{E}(X)| > \varepsilon] < e^{-t\varepsilon^2/4} = \delta,$

so $t = O\big(\log(1/\delta)/\varepsilon^2\big)$ samples suffice for error probability $\delta$.

                                 Number of Samples           Number of Random Bits
  Truly Random Samples           O((1/ε²) log(1/δ))          O(m · (1/ε²) log(1/δ))
  Pairwise Independent Samples   O((1/ε²) (1/δ))             O(m + log(1/ε) + log(1/δ))

Proposition. Let $X_1, \dots, X_t$ be $k$-wise independent random variables taking values in $[0, 1]$, and let $X = (\sum_{i=1}^t X_i)/t$. Then

$\Pr[|X - \mathrm{E}(X)| \ge \varepsilon] \le \left(\frac{k}{t\varepsilon^2}\right)^{\lfloor k/2 \rfloor}.$

Example 7 (Constructing $k$-wise independence from polynomials). Let $\mathbb{F}$ be a finite field. Define the family of functions $\mathcal{H} = \{h_{a_0, a_1, \dots, a_{k-1}} : \mathbb{F} \to \mathbb{F}\}$, where

$h_{a_0, a_1, \dots, a_{k-1}}(x) = a_0 + a_1 x + \cdots + a_{k-1} x^{k-1}$

for $a_0, a_1, \dots, a_{k-1} \in \mathbb{F}$.
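A minimal sketch of Example 7 over $\mathbb{F}_p$ for a prime $p$; the constants are illustrative assumptions. Sampling one function costs only $k \log p$ random bits, and $k$-wise independence follows because exactly one polynomial of degree less than $k$ interpolates any $k$ prescribed values at $k$ distinct points.

```python
import random

P = 101  # prime, so Z_p is the field F_p
K = 4    # polynomials of degree < K give a K-wise independent family

def sample_hash():
    """Draw h(x) = a_0 + a_1 x + ... + a_{K-1} x^{K-1} with uniform a_i
    in F_p. The values H(x_1), ..., H(x_K) at any K distinct points are
    independent and uniform, by polynomial interpolation."""
    coeffs = [random.randrange(P) for _ in range(K)]
    return lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P

h = sample_hash()
print([h(x) for x in range(10)])  # K-wise independent uniform values in F_p
```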