Lecture 2: Proof of Switching Lemma

Yuan Li

June 25, 2014

1 Decision Tree

Instead of bounding the probability that $f|_\rho$ can be written as some $s$-DNF, we estimate the probability that $f|_\rho$ can be computed by a decision tree of height $s$, which is a stronger statement.

Definition 1.1. A decision tree is a binary tree in which every non-leaf node has exactly two children. Each non-leaf node is associated with a variable $x_i$; the left child corresponds to $x_i = 0$ and the right child to $x_i = 1$. Each leaf is labeled 0 or 1, representing the output of the function on the path from the root to that leaf. The height of a decision tree is the length of the longest path from the root to a leaf.

Every decision tree computes a Boolean function. We leave it as an exercise to prove the following.

Exercise 1.2. Every decision tree of height $s$ can be written as an $s$-DNF or an $s$-CNF.
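For the DNF direction of the exercise, every path from the root of a height-$s$ tree to a 1-leaf contributes one term of at most $s$ literals. Here is a minimal Python sketch; the tuple encoding of trees (a leaf $0$/$1$, or `(i, left, right)` querying $x_i$) is an assumption of mine, not notation from the lecture.

```python
def tree_to_dnf(tree, path=()):
    """Return a DNF (list of terms) equivalent to the decision tree.

    A term is a tuple of literals (i, b), meaning "x_i = b".  One term is
    produced per root-to-1-leaf path, so a height-s tree yields an s-DNF.
    """
    if tree == 1:
        return [path]                # the current path is a satisfying term
    if tree == 0:
        return []                    # 0-leaves contribute no terms
    i, left, right = tree
    return (tree_to_dnf(left,  path + ((i, 0),)) +
            tree_to_dnf(right, path + ((i, 1),)))

# Example: a height-2 tree for x_1 XOR x_2 gives a 2-DNF.
xor_tree = (1, (2, 0, 1), (2, 1, 0))
print(tree_to_dnf(xor_tree))         # [((1, 0), (2, 1)), ((1, 1), (2, 0))]
```

The $s$-CNF direction is symmetric: take the paths to the 0-leaves and negate the corresponding terms.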

The converse is not true. Given a CNF (or, similarly, a DNF), there is a natural way to build a decision tree: start from the first clause $C$ and query its first variable; if $C$ becomes a constant, move on to the next clause, and otherwise keep querying the free variables of $C$. The resulting tree is called the canonical decision tree, defined as follows.

Definition 1.3 (Canonical Decision Tree). Given a CNF $f$, first fix an order on the clauses and the variables, say $f = C_1 \wedge C_2 \wedge \cdots \wedge C_m$. Pick the first variable of $C_1$, say $x_1$, as the root of the canonical decision tree. In the left child of the root, $x_1 = 0$, and:

- if $C_1|_{x_1=0} = 1$, the clause is satisfied, so move on to the next clause; that is, apply the same construction to $C_2 \wedge \cdots \wedge C_m$ to build the subtree;
- if $C_1|_{x_1=0} = 0$, then $f|_{x_1=0} = 0$; terminate and label the leaf 0;
- if the value of $C_1|_{x_1=0}$ is undetermined, i.e., $x_1$ (not its negation) appears in $C_1$, apply the same construction to the next free variable in $C_1$.

The right child of $x_1$ is handled similarly. In general, suppose we have already built the tree down to clause $C_i$ and variable $x_j$, where the restriction $\rho : \{x_1, \ldots, x_{j-1}\} \to \{0, 1, *\}$ corresponds to the current path in the decision tree. We construct the subtree under $\rho$ as follows (just as we did at the root $x_1$):

- if $C_i|_{\rho \cup \{x_j=0\}} = 1$, move on to the next clause; that is, apply the same construction to $C_{i+1} \wedge \cdots \wedge C_m$ to build the subtree;
- if $C_i|_{\rho \cup \{x_j=0\}} = 0$, then $f|_{\rho \cup \{x_j=0\}} = 0$; terminate and label the leaf 0;
- if the value of $C_i|_{\rho \cup \{x_j=0\}}$ is undetermined, which implies that $x_j$ (not its negation) appears in $C_i$, apply the same construction to the next free variable in $C_i$.

Eventually we exhaust all clauses and all free variables. It is easy to see that the decision tree constructed this way computes $f$.

Given a Boolean function $f$, let $\mathrm{DT}_c(f)$ denote the height of its canonical decision tree, where the subscript $c$ emphasizes that it is the canonical decision tree, which in general need not be the shortest decision tree computing $f$.

In order to prove Håstad's Switching Lemma, we prove a stronger statement.

Lemma 1.4 (Håstad's Switching Lemma, decision tree version). Let $f$ be any $t$-CNF. Then
$$\Pr_\rho[\mathrm{DT}_c(f|_\rho) > s] < (5pt)^s,$$
where $\rho \in R(p, 1/2)$ and $\mathrm{DT}_c(f|_\rho)$ denotes the height of the canonical decision tree computing $f|_\rho$.
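To make Definition 1.3 concrete, here is a small Python sketch that computes $\mathrm{DT}_c$; the signed-integer clause encoding and the function name are my own choices, not the lecture's notation.

```python
def canonical_dt_height(clauses, assignment=None):
    """Height of the canonical decision tree (Definition 1.3) of a CNF.

    A clause is a list of nonzero ints: +i stands for x_i, -i for its
    negation.  `assignment` maps a variable index to 0/1; variables not in
    it are free.  Clauses and literals are processed in the given order.
    """
    assignment = dict(assignment or {})
    for clause in clauses:
        status = [(assignment[abs(l)] == (l > 0)) if abs(l) in assignment
                  else None for l in clause]
        if any(s is True for s in status):
            continue                 # clause satisfied: move on to the next one
        free = [abs(l) for l, s in zip(clause, status) if s is None]
        if not free:
            return 0                 # clause falsified: a leaf labeled 0
        i = free[0]                  # query the first free variable of the clause
        heights = []
        for b in (0, 1):             # recurse on the children x_i = 0 and x_i = 1
            assignment[i] = b
            heights.append(canonical_dt_height(clauses, assignment))
        del assignment[i]
        return 1 + max(heights)
    return 0                         # all clauses satisfied: a leaf labeled 1

print(canonical_dt_height([[1, 2], [-1, 3]]))   # a small 2-CNF; prints 2
```

Restricting $f$ by $\rho$ amounts to passing the $0/1$ part of $\rho$ as `assignment`, so the same sketch computes $\mathrm{DT}_c(f|_\rho)$.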

The proof presented in the next section is slightly different from Håstad's original proof, which goes through the following stronger statement: for any $t$-CNF $f$,
$$\Pr_\rho[\mathrm{DT}_c(f|_\rho) > s \mid g|_\rho \equiv 0] < (5pt)^s,$$
where $g$ is an arbitrary Boolean function. Proving something stronger to make the induction go through is a generally useful trick. However, we find the trick unnecessary here, and we will prove the lemma directly by induction on the number of clauses.

2 Håstad's Proof

Proof. We prove Lemma 1.4 by induction on the number of clauses. Write the $t$-CNF $f$ as $f = C_1 \wedge C_2 \wedge \cdots \wedge C_m$. Rather than presenting a bare induction, we try to unveil how the proof is discovered, that is, we write down and estimate a recursive formula. For this purpose, let
$$T(s, t, m) := \sup_f \Pr_{\rho \in R(p, 1/2)}[\mathrm{DT}_c(f|_\rho) > s],$$
where the supremum is taken over all $t$-CNFs $f$ with at most $m$ clauses, over any number of variables. The goal is to prove
$$T(s, t, m) < (5pt)^s. \qquad (1)$$

If $m = 1$, (1) is trivially true, because the single clause $C_1$ is automatically an $s$-DNF for any $s \ge 1$. If $m > 1$, assume Lemma 1.4 holds for every $t$-CNF with fewer than $m$ clauses; we shall prove it for an arbitrary $t$-CNF $f$ with $m$ clauses. Without loss of generality, assume $C_1 = x_1 \vee x_2 \vee \cdots \vee x_t$. (If some literal of $C_1$ appears negated, negate all occurrences of that variable.) Think of $\rho \in R(p, 1/2)$ as the composition of two restrictions $\rho = \rho_1 \rho_2$, where
$$\rho_1 : \{x_1, \ldots, x_t\} \to \{0, 1, *\}, \qquad \rho_2 : \{x_{t+1}, \ldots, x_n\} \to \{0, 1, *\},$$
and $\rho_1, \rho_2 \in R(p, 1/2)$ as well.
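The distribution $R(p, 1/2)$ and the probability being bounded are easy to simulate. A sketch reusing `canonical_dt_height` from above (the sampler, the example CNF, and all parameters are my own choices, purely for illustration):

```python
import random

def sample_restriction(n, p):
    """Sample rho in R(p, 1/2): each variable is * (None) with probability p,
    and otherwise 0 or 1 with probability (1 - p)/2 each, independently."""
    return {i: None if random.random() < p else random.randint(0, 1)
            for i in range(1, n + 1)}

# Estimate Pr[DT_c(f|rho) > s] for a small 2-CNF, against the bound (5pt)^s.
f = [[1, 2], [-2, 3], [-1, -3], [3, 4]]     # a 2-CNF on 4 variables
t, s, p, trials = 2, 2, 0.05, 50_000
bad = 0
for _ in range(trials):
    rho = sample_restriction(4, p)
    fixed = {i: b for i, b in rho.items() if b is not None}
    bad += canonical_dt_height(f, fixed) > s
print(bad / trials, "<?", (5 * p * t) ** s)  # empirical probability vs. bound
```

Splitting the sampled coordinates into the variables of $C_1$ and the rest gives exactly the decomposition $\rho = \rho_1 \rho_2$ above, since the coordinates are independent.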

For convenience, let $g = C_2 \wedge \cdots \wedge C_m$, so that $f = C_1 \wedge g$ and
$$f|_\rho = (C_1 \wedge g)|_\rho = C_1|_{\rho_1} \wedge g|_{\rho_1 \rho_2}.$$

Now let us estimate the height of the canonical decision tree of $C_1|_{\rho_1} \wedge g|_{\rho_1 \rho_2}$ case by case. Recall the definition of the canonical decision tree: we construct it clause by clause, literal by literal. Here we start with $C_1|_{\rho_1}$, and there are two cases.

Case 1: there exists some $i \in [t]$ with $\rho_1(x_i) = 1$, and thus $C_1|_{\rho_1} \equiv 1$. By the definition of the canonical decision tree, we simply move on to the next clause. Denote this event by $A$. It is easy to verify that
$$\Pr[A] = 1 - \left(\frac{1+p}{2}\right)^t$$
and
$$\Pr[\mathrm{DT}_c(f|_\rho) > s \mid A] \le T(s, t, m-1),$$
since $f|_\rho = g|_\rho$ and $g$ is a $t$-CNF with $m - 1$ clauses.

Case 2: $\rho_1(x_i) \in \{0, *\}$ for all $i \in [t]$. Let $L := \rho_1^{-1}(*)$, and denote this event by $B_L$, where $L \subseteq [t]$. The union of the events $B_L$ over all $L \subseteq [t]$ is the complement of $A$. It is easy to see that
$$\Pr[B_L] = p^l \left(\frac{1-p}{2}\right)^{t-l},$$
where $l := |L|$.

Recall the construction of the canonical decision tree: we query the free variables in $L$ one by one until $C_1$ becomes a constant, and then either move on to $g = C_2 \wedge \cdots \wedge C_m$ or terminate (depending on whether $C_1$ becomes 1 or 0). Querying the free variables in $L$ has $2^l$ possible outcomes in total, which correspond one-to-one to restrictions (mappings) $\sigma : \{x_i : i \in L\} \to \{0, 1\}$. If $\sigma \equiv 0$, we simply terminate, because $C_1$ becomes 0 and thus $f$ becomes 0. Otherwise, we construct the canonical decision tree for $g|_{\rho_1 \sigma \rho_2}$ and append it below $\sigma$, where $\sigma$ represents a path in the decision tree.
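The two case probabilities can be sanity-checked with the sampler above, e.g. for $\Pr[A]$:

```python
# A is the event that rho_1 sets some variable of C_1 = x_1 v ... v x_t to 1.
t, p, trials = 5, 0.2, 100_000
hits = 0
for _ in range(trials):
    rho1 = sample_restriction(t, p)           # rho_1 lives on x_1, ..., x_t
    hits += any(rho1[i] == 1 for i in range(1, t + 1))
print(hits / trials, 1 - ((1 + p) / 2) ** t)  # the two numbers should be close
# Pr[B_L] = p^l ((1-p)/2)^(t-l) can be checked the same way, by counting the
# samples whose set of *-variables is exactly L.
```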

The event $\mathrm{DT}_c(f|_\rho) > s$ implies that there exists some $\sigma \not\equiv 0$ such that $\mathrm{DT}_c(g|_{\rho_1 \sigma \rho_2}) > s - l$, to which we can apply a union bound:
$$\Pr[\mathrm{DT}_c(f|_\rho) > s \mid B_L] \le (2^l - 1)\, T(s - l, t, m - 1).$$

Putting the two cases together,
$$\Pr[\mathrm{DT}_c(f|_\rho) > s] = \Pr[A] \Pr[\mathrm{DT}_c(f|_\rho) > s \mid A] + \sum_{L \subseteq [t]} \Pr[B_L] \Pr[\mathrm{DT}_c(f|_\rho) > s \mid B_L]$$
$$\le \left(1 - \left(\tfrac{1+p}{2}\right)^t\right) T(s, t, m-1) + \sum_{L \subseteq [t],\; l := |L|} p^l \left(\tfrac{1-p}{2}\right)^{t-l} (2^l - 1)\, T(s - l, t, m-1).$$

What remains is calculation. Instead of plugging in $(5pt)^s$, we plug in $(c_H pt)^s$, where the constant $c_H$ is to be optimized later. It suffices to show that there exists some $c_H$ such that
$$(c_H pt)^s \ge \left(1 - \left(\tfrac{1+p}{2}\right)^t\right)(c_H pt)^s + \sum_{L} p^l \left(\tfrac{1-p}{2}\right)^{t-l} (2^l - 1)(c_H pt)^{s-l}$$
holds for all integers $s, t \ge 1$. Dividing both sides by $(c_H pt)^s$,
$$1 \ge 1 - \left(\tfrac{1+p}{2}\right)^t + \sum_{L} p^l \left(\tfrac{1-p}{2}\right)^{t-l} (2^l - 1)(c_H pt)^{-l},$$
that is,
$$\left(\tfrac{1+p}{2}\right)^t \ge \sum_{L} p^l \left(\tfrac{1-p}{2}\right)^{t-l} (2^l - 1)(c_H pt)^{-l}.$$
Dividing both sides by $((1-p)/2)^t$,
$$\left(\tfrac{1+p}{1-p}\right)^t \ge \sum_{L} p^l \left(\tfrac{1-p}{2}\right)^{-l} (2^l - 1)(c_H pt)^{-l} = \sum_{L} \left( \left(\tfrac{4}{c_H t(1-p)}\right)^l - \left(\tfrac{2}{c_H t(1-p)}\right)^l \right)$$
$$= \left(\tfrac{4}{c_H t(1-p)} + 1\right)^t - \left(\tfrac{2}{c_H t(1-p)} + 1\right)^t,$$
where the last step is by the binomial theorem (the $L = \emptyset$ term vanishes because of the factor $2^l - 1$). Multiplying both sides by $(1-p)^t$, the required inequality becomes
$$(1+p)^t \ge \left(1 - p + \tfrac{4}{c_H t}\right)^t - \left(1 - p + \tfrac{2}{c_H t}\right)^t;$$
its left-hand side is increasing in $p$ and its right-hand side is decreasing in $p$, so it suffices to check it at $p = 0$, that is,
$$\left(\tfrac{4}{c_H t} + 1\right)^t - \left(\tfrac{2}{c_H t} + 1\right)^t \le 1.$$
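Tabulating the left-hand side of this last condition for $c_H = 5$ (a quick script of mine, not part of the notes) shows that it stays below $1$, approaching the $t \to \infty$ limit $e^{4/c_H} - e^{2/c_H} \approx 0.734$:

```python
import math

c_H = 5
for t in (1, 2, 5, 10, 100, 10_000):
    val = (1 + 4 / (c_H * t)) ** t - (1 + 2 / (c_H * t)) ** t
    print(t, round(val, 4))                   # 0.4, 0.52, 0.631, ... all below 1
# Limit as t -> infinity, using (1 + a/t)^t -> e^a:
print(math.exp(4 / c_H) - math.exp(2 / c_H))  # about 0.7337
```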

Indeed, it is not difficult to check that for $c_H = 5$,
$$\left(\tfrac{4}{c_H t} + 1\right)^t - \left(\tfrac{2}{c_H t} + 1\right)^t < 1$$
for every $t \ge 1$, which finishes our proof.

In many applications of the switching lemma, $t = s$ and $p$ is taken to be $\Theta(1/s)$, in which case one can get a better constant $c_H$. The following quantitatively stronger version of Håstad's Switching Lemma can be extracted directly from the last step of our proof.

Lemma 2.1 (Håstad's Switching Lemma, improved constant). Let $f$ be any $t$-CNF. Then
$$\Pr_\rho[\mathrm{DT}_c(f|_\rho) > s] < (c_H pt)^s,$$
where $\rho \in R(p, 1/2)$, $\mathrm{DT}_c(f|_\rho)$ denotes the height of the canonical decision tree computing $f|_\rho$, and $c_H$ is any constant such that
$$\left(\tfrac{1+p}{1-p}\right)^t \ge \left(\tfrac{4}{c_H t(1-p)} + 1\right)^t - \left(\tfrac{2}{c_H t(1-p)} + 1\right)^t.$$

Exercise 2.2. Use the above lemma to improve the multiplicative constant in the $2^{\Omega(n^{1/(d-1)})}$ lower bound for $\mathrm{PARITY}_n$, where $d$ is the depth as usual.

Remark 2.3. The improved-constant version appears in Håstad's dissertation and is due to Ravi Boppana.

References

[1] Paul Beame, A switching lemma primer. Technical Report, Department of Computer Science and Engineering, University of Washington, Seattle, 1994.

[2] Johan Håstad, Computational limitations for small depth circuits. Ph.D. Thesis, Massachusetts Institute of Technology, 1986.

[3] Alexander A. Razborov, An equivalence between second order bounded domain bounded arithmetic and first order bounded arithmetic. In P. Clote and J. Krajíček, editors, Arithmetic, Proof Theory and Computational Complexity, pages 247-277. Oxford University Press, 1993.