CS6840: Advanced Complexity Theory Mar 29, Lecturer: Jayalal Sarma M.N. Scribe: Dinesh K.


Lecture 46: Size lower bounds for AC^0 circuits computing Parity

Theme: Circuit Complexity

Lecture Plan: Proof of one of the first separation results in circuit complexity, Parity ∉ AC^0, assuming Håstad's switching lemma.

In this lecture, we shall see an important separation result in circuit complexity: Parity ∉ AC^0. That is, any constant-depth circuit with unbounded fan-in gates computing Parity must have size exponential in the number of inputs. We shall state Håstad's switching lemma and a lemma derived from it that will be used in proving this result; the lemmas will be proved in subsequent lectures. This result was first proved in 1981 by Furst-Saxe-Sipser, and independently by Ajtai. The proof was later simplified by Johan Håstad in 1986 using conditional probabilities. We shall see a simpler proof, due to Alexander Razborov (1987), that avoids conditional probabilities.

Let us start off by defining the notion of a decision tree.

1 Decision Tree

Definition 1. A decision tree is a rooted binary tree with each internal node labelled with a variable and each leaf labelled with 0 or 1.

A decision tree can also be seen as a model of computation. Consider a function f : {0,1}^n → {0,1}. A decision tree computes f in the following manner: on input x, start at the root node v and check whether the variable labelling v, say x_i, is 0 or 1. If it is 0, go to the left child; if it is 1, go to the right child. Repeat this until a leaf is reached; the label on that leaf is the value f(x). When we move to a node labelled with the variable x_i, we say that x_i has been queried. An example of a decision tree computing a function f(x_1, x_2, x_3) is shown in Figure 1.

Let us now define the parameter of interest for a decision tree. One natural question to ask is how many queries are needed to decide the value of f on a given input x.
This can be formalised as the depth of a decision tree.
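The query process just described can be sketched in code. This is a minimal illustration with an encoding of my own choosing (not from the lecture): an internal node is a tuple (i, left, right), meaning "query x_i; take the left branch on 0, the right branch on 1", and a leaf is the constant 0 or 1.

```python
def evaluate(tree, x):
    """Evaluate a decision tree on input x, a sequence of 0/1 values."""
    while isinstance(tree, tuple):   # internal node: query a variable
        i, left, right = tree
        tree = left if x[i] == 0 else right
    return tree                      # leaf reached: this is f(x)

def depth(tree):
    """Length of the longest root-to-leaf path, counted in queried nodes."""
    if not isinstance(tree, tuple):
        return 0
    _, left, right = tree
    return 1 + max(depth(left), depth(right))
```

For example, the tree (0, (1, 0, 1), 1) queries x_0, queries x_1 only on the 0-branch, and computes x_0 OR x_1 with depth 2.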

Figure 1: Decision tree computing f(x_1, x_2, x_3).

1.1 Decision Tree Depth vs Term Size

Definition 2 (Depth). The depth of a decision tree computing a function f is the length of the longest root-to-leaf path, measured in the number of nodes (variables) on it. It is denoted Depth(T(f)).

It can be observed that a decision tree of a boolean function f is in fact a representation of its truth table. This leads to the following observation.

Observation 3. Given a decision tree, a DNF corresponding to it can be written down. Moreover, the DNF will be unique.

Given a decision tree, one can construct a DNF for the function it computes in the following way: for each path that ends in a leaf labelled 1, write down the conjunction of the literals (variables or their negations) queried along the path to form a term, and take the OR of all such terms to get a DNF. Since there is a unique path between any two vertices in a tree, and in particular between the root and each leaf, the resulting formula is unique. A unique CNF can be obtained in a similar way from the paths ending in leaves labelled 0.

What about obtaining a unique decision tree from a DNF of f? This is not always possible. (For example, we can have two decision trees computing (x_1 ∧ x_2) ∨ (¬x_1 ∧ ¬x_2), one with x_1 at the root and the other with x_2 at the root.) Hence it is necessary to define a canonical decision tree computing f. The idea is to fix an ordering of the literals appearing in each term, induced by the ordering of the indices of the variables. This gives a total order, which enforces the uniqueness of the canonical representation.
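The path-to-term construction behind the observation can be sketched in code (the encoding is mine, not the lecture's: an internal node is a tuple (i, left, right) with left being the 0-branch, a leaf is 0 or 1, and a term is a tuple of (index, value) literals):

```python
def tree_to_dnf(tree, path=()):
    """List the root-to-leaf paths ending in a 1-leaf, each as a term."""
    if not isinstance(tree, tuple):           # leaf: keep the path iff label is 1
        return [path] if tree == 1 else []
    i, left, right = tree
    return (tree_to_dnf(left,  path + ((i, 0),))    # x_i = 0 branch
          + tree_to_dnf(right, path + ((i, 1),)))   # x_i = 1 branch

def eval_dnf(dnf, x):
    """A DNF is an OR of terms; a term is an AND of (index, value) literals."""
    return int(any(all(x[i] == v for i, v in term) for term in dnf))
```

Because each 1-leaf contributes exactly one term, the resulting DNF is uniquely determined by the tree, as the observation states.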

1.2 Canonical Decision Tree

Definition 4. The canonical decision tree of a boolean function f : {0,1}^n → {0,1} is defined inductively as follows. Let c_1 ∨ c_2 ∨ ... ∨ c_m be the DNF representation of f, and let c_i be its first non-trivial term, on the variables x_{i_1}, x_{i_2}, ..., x_{i_r}, where r is at most the term size of the DNF and i_1 < i_2 < ... < i_r (the canonical ordering). Construct a full binary tree (Figure 2) in which every node at level j gets the label x_{i_j}, for 1 ≤ j ≤ r.

Figure 2: Canonical decision tree: a full binary tree querying x_{i_1} at the root, x_{i_2} at level 2, ..., x_{i_r} at level r.

Note that some settings of these variables may cause the term to evaluate to 1; in that case the assignment to x_{i_1}, x_{i_2}, ..., x_{i_r} is said to have trivialised the function. Such an assignment may also simplify some of the other terms. Now, for every assignment ρ that has not trivialised f, we look at f|ρ and apply the same process to it. In the base case, where the function is the constant 0 or 1, we simply attach a leaf with the appropriate label. Since each term has size at most r and there are m terms, the depth of the canonical decision tree is at most rm.

1.3 Decision trees for Parity

Let us now ask the question:

What is the decision tree depth required for Parity?

Claim 5. Any decision tree computing Parity_n must have depth n.

Proof. (Adversarial argument) Suppose there exists a decision tree of depth < n computing Parity_n. Then on every root-to-leaf path, some variable is not queried. Fix any input and follow it to a leaf; flipping the unqueried variable on that path leads to the same leaf, so the output is unaffected. But for Parity, every flip of an input variable must change the output. Hence the decision tree does not correctly compute Parity_n, contradicting our assumption.

2 Exponential Size Lower Bounds for Parity

Here is the statement of the theorem Parity ∉ AC^0.

Theorem 6. If a circuit C with unbounded fan-in and constant depth d computes Parity_n, then the size of C must be 2^{Ω(n^{1/(d-1)})}.

Since all AC^0 circuits are polynomial sized, Parity ∉ AC^0.

Let us look at a high-level picture of the theorem and its proof. How could one show that Parity ∉ AC^0? One way is to identify a property that Parity has which no function in AC^0 has. Just now we saw that Parity requires decision trees of full depth.[1] But this property can easily be verified to hold for the AND_n and OR_n functions as well, by the same argument, so it will not work directly. However, if a random restriction[2] ρ ∈ R_l^n is applied to AND_n or OR_n, the chance that the function does not get killed (made constant) is really small (1/2^{n-l}). The Parity function, on the other hand, after a restriction still remains parity or its negation on the remaining input variables. It is this property that will help in separating Parity from AC^0.

Let us now look at the following lemma by Håstad.

Lemma 7 (Håstad's Switching Lemma). Let F be a DNF with term size at most r. Let l = pn with p ≤ 1/7, and let ρ be drawn uniformly from R_l^n. Denote by T(F|ρ) the canonical decision tree of F|ρ. Then for any s > 0,

Pr_{ρ ∈ R_l^n} [ Depth(T(F|ρ)) ≥ s ] < (7pr)^s.

The switching lemma captures how the application of a random restriction simplifies a DNF. Note that the above statement also holds for F expressed as a CNF. The following lemma follows from Håstad's switching lemma.
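The contrast drawn above can be checked mechanically. This is a small sketch of my own (the helper names and parameter choices are not from the lecture): sample a restriction ρ from R_l^n, and verify that Parity_n restricted by ρ equals the parity of the l free bits up to a fixed offset, i.e. it is still parity or its negation.

```python
import itertools
import random

def parity(bits):
    return sum(bits) % 2

def restricted_parity_is_parity(n, l, seed):
    """Sample rho in R_l^n; check Parity_n|rho is parity of the free bits,
    up to the constant offset contributed by the fixed bits."""
    rng = random.Random(seed)
    free = sorted(rng.sample(range(n), l))            # l unassigned positions
    rho = {i: rng.randint(0, 1) for i in range(n) if i not in free}
    offset = parity(rho.values())                     # constant from fixed bits
    for ys in itertools.product([0, 1], repeat=l):    # all settings of free bits
        x = [0] * n
        for i in rho:
            x[i] = rho[i]
        for pos, y in zip(free, ys):
            x[pos] = y
        if parity(x) != (parity(ys) + offset) % 2:
            return False
    return True
```

This returns True for every choice of n, l, and seed, since the sum defining parity splits over the assigned and free coordinates.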
[1] The functions that require every decision tree computing them to have depth n are called evasive functions. Watch out for more about these functions in our course presentations!
[2] Recall R_l^n = { ρ : [n] → {0, 1, *} | the number of variables left unassigned (mapped to *) is l }.
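The action of such a restriction on a DNF can be sketched as follows (the dictionary encoding and function names are my own, not the lecture's): an assigned variable that contradicts a literal kills the term, a satisfied literal is dropped, and a term whose literals are all satisfied trivialises the whole DNF.

```python
def restrict_term(term, rho):
    """term: {index: required value}; rho: {index: assigned value}.
    Returns the surviving sub-term, or None if rho falsifies the term."""
    survivors = {}
    for i, v in term.items():
        if i in rho:
            if rho[i] != v:
                return None          # contradicted literal: term becomes 0
        else:
            survivors[i] = v         # literal on an unassigned variable survives
    return survivors                 # {} means every literal was satisfied

def restrict_dnf(dnf, rho):
    """Apply rho to each term. Returns True if some term is satisfied
    (rho trivialises the function), else the list of surviving terms."""
    remaining = []
    for term in dnf:
        t = restrict_term(term, rho)
        if t == {}:
            return True              # f|rho = 1
        if t is not None:
            remaining.append(t)
    return remaining                 # [] means f|rho = 0
```

For instance, restricting (x_0 ∧ x_1) ∨ x_2 with x_0 = 0 kills the first term, while setting x_2 = 1 trivialises the whole formula.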

Lemma 8. Let f : {0,1}^n → {0,1}, and let C be an AC^0 circuit of size S and depth d computing f. Define the height of a gate g in C to be the maximum number of AND or OR gates on a path from g to an input. Now, for 1 ≤ i ≤ d, let

n_{i+1} = n / (4 (4 log S)^i).

If n_i ≥ log S for all i ≤ d, then there exists a ρ ∈ R_{n_i}^n such that for every gate g below height i, Depth(T(g|ρ)) ≤ log S. That is, every gate below height i can be computed by a decision tree of depth at most log S.

2.1 How does this show that Parity ∉ AC^0?

From Lemma 8 we can get a proof of Parity ∉ AC^0. Here is the idea. We know that, given a circuit C in AC^0, a random restriction is likely to simplify the circuit; by simplify we mean that every gate of the restricted circuit can be computed by a small-depth decision tree. What Lemma 8 guarantees is the existence of such a restriction ρ for each level[3] i that sets all but n_i variables. Here is how. Håstad's lemma, after plugging in appropriate parameters, guarantees that for a given gate g at level i,

Pr_ρ [ Depth(T(g|ρ)) > log S ] < 1/S.

Hence, by a union bound over the at most S gates,

Pr_ρ [ ∃ g : Depth(T(g|ρ)) > log S ] < 1.

Equivalently,

Pr_ρ [ ∀ g : Depth(T(g|ρ)) ≤ log S ] > 0.

Thus, by a probabilistic-method argument, there indeed exists a ρ such that for every gate g at level i, T(g|ρ) has depth at most log S.

Now, by repeated application of such restrictions at each level, we simplify the unbounded fan-in gates by replacing them with small-depth circuits. Without loss of generality we can assume that the AND and OR gates of the AC^0 circuit alternate (if not, combine the gates of the same type appearing at consecutive levels; this is always possible since we allow unbounded fan-in). At level i, we apply the lemma and obtain for each gate a decision tree of depth at most log S; we replace the gates by equivalent DNFs. Say the parents of the gates at level i are OR gates. Since a DNF has an OR at the top, these two ORs can be merged, resulting in a decrease in depth by 1.

[3] We will be using level and height interchangeably.
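To see where a per-gate bound of the form 1/S comes from, one can plug s = log S into the switching lemma and choose the restriction parameter p so that 7pr = 1/2; the exact constants in this reconstruction are my own, not the lecture's.

```latex
\Pr_{\rho}\big[\mathrm{Depth}(T(g|_\rho)) \ge \log S\big]
  \;<\; (7pr)^{\log S}
  \;=\; \left(\tfrac{1}{2}\right)^{\log S}
  \;=\; \tfrac{1}{S},
\qquad \text{taking } p = \tfrac{1}{14r}.
```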

Figure 3: Application of Lemma 8: a gate g at level i is replaced by a small-depth decision tree.

Suppose instead that the parent gate is an AND; what can be done? We replace the decision tree by its corresponding CNF. Since the switching lemma holds for CNFs as well, we can safely do this. Now there will again be two AND gates at consecutive levels, which can be merged to decrease the height by 1. (Essentially, we are switching between the two normal forms as we go up the levels.) Applying the lemma d - 2 times, we end up with a decision tree of depth O(log S) that looks at n_{d-1} variables; the existence of a restriction ρ achieving this is guaranteed as above. Now plug back in the equivalent circuit. Since F computes parity, F|ρ, which looks at n_{d-1} variables, will still compute the parity function (or its negation). But we know that parity on those variables requires a decision tree of depth n_{d-1}. Hence we have

log S ≥ n_{d-1} = n / (4 (4 log S)^{d-2}),

giving us S ≥ 2^{(1/4) n^{1/(d-1)}}, that is, S = 2^{Ω(n^{1/(d-1)})}, proving our main Theorem 6.
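The closing chain of inequalities can be written out step by step (reconstructed from the definition of n_i in Lemma 8; logarithms are base 2):

```latex
\log S \;\ge\; n_{d-1} \;=\; \frac{n}{4\,(4\log S)^{d-2}}
\;\Longrightarrow\; (4\log S)^{d-1} \;\ge\; n
\;\Longrightarrow\; \log S \;\ge\; \tfrac{1}{4}\,n^{1/(d-1)}
\;\Longrightarrow\; S \;\ge\; 2^{\frac{1}{4}\,n^{1/(d-1)}} \;=\; 2^{\Omega\left(n^{1/(d-1)}\right)}.
```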