Time-Space Tradeoffs for SAT

Lecture 8: Time-Space Tradeoffs for SAT
April 22, 2004
Lecturer: Paul Beame
Notes: Terri Moore

Definition 8.1. TIMESPACE(T(n), S(n)) = {L ⊆ {0,1}* | there is an offline, multitape TM M such that L = L(M) and M uses time O(T(n)) and space O(S(n))}.

One should note that, while TIMESPACE(T(n), S(n)) ⊆ TIME(T(n)) ∩ SPACE(S(n)), in general the definition is different from TIME(T(n)) ∩ SPACE(S(n)), since a single machine is required to run in both time O(T(n)) and space O(S(n)).

We will prove a variant of the following theorem, which is the strongest form of time-space tradeoff currently known. The first result of this sort was shown by Fortnow; on the way to proving bounds like these we prove bounds for smaller running times due to Lipton and Viglas.

Theorem 8.1 (Fortnow–van Melkebeek). Let φ = (√5 + 1)/2. There is a function s : R⁺ → R⁺ such that for all c < φ, SAT ∉ TIMESPACE(n^c, n^{s(c)}), and lim_{c→1} s(c) = 1.

We will prove a weaker form of this theorem: for c < φ, SAT ∉ TIMESPACE(n^c, n^{o(1)}). By the efficient reduction of NTIME(T(n)) to 3-SAT described in Lemma 5.4, which maps an NTIME(T(n)) machine to a 3CNF formula of size O(T(n) log T(n)) in time O(T(n) log T(n)) and space O(log T(n)), to prove this weaker form it suffices to show the following.

Theorem 8.2. For c < (√5 + 1)/2, NTIME(n) ⊄ TIMESPACE(n^c, n^{o(1)}).

One simple and old idea we will use is that of padding, which shows that if simulations at low complexity levels exist then simulations at high complexity levels also exist.

Lemma 8.3 (Padding Meta-lemma). Suppose that C(n) ⊆ C'(f(n)) for parameterized complexity class families C and C', parameterized by n and f(n) respectively, and some function f : N → R⁺. Then for any t(n) ≥ n such that t(n) is computable using the resources allowed for C(t(n)), C(t(n)) ⊆ C'(f(t(n))).

Proof. Let A ∈ C(t(n)) and let M be a machine deciding A that witnesses this fact. Given an input x ∈ {0,1}^n for M, pad x to length t(n), say by setting x_pad = x01^{t(n)−n−1}, and let A_pad = {x01^{t(|x|)−|x|−1} | x ∈ A}. Since it is very easy to strip off the padding, A_pad ∈ C(n); so, by assumption, there is a witnessing machine M' showing that A_pad ∈ C'(f(n)). We use M' as follows: on input x, create x_pad and run M' on x_pad. Since |x_pad| = t(n), the resources used, measured in terms of the original input length n, are those of C'(f(t(n))), as required.
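To see how Lemma 8.3 will be used, here is the padded instance that the proof outline below relies on (a worked special case of the lemma, spelled out only for concreteness). If NTIME(n) ⊆ TIMESPACE(n^c, n^{o(1)}), then for any time-constructible T(n) ≥ n, applying the lemma with t(n) = T(n) gives

   NTIME(T(n)) ⊆ TIMESPACE(T(n)^c, T(n)^{o(1)}).

For example, NTIME(n²) ⊆ TIMESPACE(n^{2c}, n^{o(1)}).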

Theorem 8.4 (Seiferas–Fischer–Meyer). If f, g : N → N are time constructible and f(n + 1) is o(g(n)), then NTIME(f(n)) ⊊ NTIME(g(n)).

Note that, unlike the case of the deterministic time hierarchy, there is no multiplicative O(log T(n)) factor in the separation. In this sense, it is stronger at low complexity levels. However, for very quickly growing functions (for example, doubly exponential ones), the condition involving the +1 is weaker than a multiplicative log T(n) factor. When we use it we will need to ensure that the time bounds we consider are fairly small.

Proof Sketch. The general idea of the simulation begins in a similar way to the ordinary time hierarchy theorem: one adds a clock and uses a fixed simulation to create a universal Turing machine for all machines running in time bound f(n). In this case, unlike the deterministic case, we can simulate an arbitrary k-tape NTM by a 2-tape NTM with only a constant factor of simulation loss. This allows one to add the clock with no complexity loss at all. The hard part is to execute the complementation part of the diagonalization argument, and this is done by a sophisticated padding argument that causes the additional +1. It uses the fact that unary languages of arbitrarily high NTIME complexity exist.

We will use the following outline in the proof of Theorem 8.2.

1. Assume NTIME(n) ⊆ TIMESPACE(n^c, n^{o(1)}) for some constant c.

2. Show that for some (not too large) time-constructible t(n), NTIME(n) ⊆ TIME(n^c) implies TIMESPACE(t(n), t(n)^{o(1)}) ⊆ NTIME(t(n)^{f(c)+o(1)}) for some constant f(c) > 0.

3. For T(n) = t(n)^{1/c}, where t(n) is given in part 2, put the two parts together to derive

      NTIME(T(n)) ⊆ TIMESPACE(T(n)^c, T(n)^{o(1)})   [from part 1 by padding, Lemma 8.3]
                 = TIMESPACE(t(n), t(n)^{o(1)})
                 ⊆ NTIME(t(n)^{f(c)+o(1)}) = NTIME(T(n)^{c·f(c)+o(1)})   [by part 2].

4. If c·f(c) < 1, this contradicts the NTIME hierarchy given in Theorem 8.4.

Part 2 of this outline requires the most technical work. The key notions in the proof involve extensions of the ideas behind Savitch's Theorem, and for this it is convenient to use alternating time complexity classes.

Definition 8.2. Define Σ_2 TIME(T(n)) (more generally, Σ_k TIME(T(n))) to be the set of all languages L such that

   x ∈ L ⟺ ∃^{T(|x|)} y_1 ∀^{T(|x|)} y_2 ... Q^{T(|x|)} y_k M(x, y_1, ..., y_k),

where the superscripts bound the lengths of the quantified strings, Q is ∃ if k is odd and ∀ if k is even, and M runs for at most O(T(|x|)) steps on input (x, y_1, ..., y_k). Define Π_k TIME(T(n)) similarly, beginning with a universal quantifier.

Lemma 8.5. For S(n) ≥ log n and any integer function b : N → N, TIMESPACE(T(n), S(n)) ⊆ Σ_2 TIME(T'(n)) where T'(n) = b(n)·S(n) + T(n)/b(n) + log b(n).

Proof. Recall from the proof of Savitch's Theorem that we can view the computation as operating on the graph of TM configurations. For configurations C and C' we write C ⊢^t C' if and only if configuration C yields C' in at most t steps. In the proof of Savitch's Theorem we used the fact that we could assume a fixed form for the initial configuration C_0 and a unique accepting configuration C_f, and expressed

   (C_0 ⊢^T C_f) ⟺ ∃ C_m. ((C_0 ⊢^{T/2} C_m) ∧ (C_m ⊢^{T/2} C_f)).

In a similar way we can break up the computation into b = b(n) pieces for any b ≥ 2, so that, denoting C_f by C_b, we derive

   (C_0 ⊢^T C_b) ⟺ ∃^{(b−1)S} C_1, C_2, ..., C_{b−1} ∀^{log b} i. (C_{i−1} ⊢^{T/b} C_i).

Each configuration has size O(S(n) + log n) = O(S(n)), and determining whether or not C_{i−1} ⊢^{T/b} C_i requires time at most O(T(n)/b(n) + S(n)). Thus the computation overall is in Σ_2 TIME(b(n)·S(n) + T(n)/b(n) + log b(n)).
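Before applying Lemma 8.5, it is worth recording how b(n) will be chosen; this balancing calculation is routine and is only spelled out here for convenience. Ignoring the low-order log b(n) term, the bound b(n)·S(n) + T(n)/b(n) is minimized by equating the two terms:

   b(n)·S(n) = T(n)/b(n)  ⟹  b(n) = √(T(n)/S(n)),

which gives

   b(n)·S(n) + T(n)/b(n) = 2√(T(n)·S(n)) = O(√(T(n)S(n))),

and the log b(n) term is lower order since S(n) ≥ log n. This is exactly the choice made in the corollary that follows.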

Corollary 8.6. For all T, S : N → R⁺, TIMESPACE(T(n), S(n)) ⊆ Σ_2 TIME(√(T(n)S(n))).

Proof. Apply Lemma 8.5 with b(n) = √(T(n)/S(n)).

Given a simple assumption about the simulation of nondeterministic time by deterministic time, we can remove an alternation from the computation.

Lemma 8.7. If T'(n) ≥ n is time constructible and NTIME(n) ⊆ TIME(T'(n)), then for time-constructible T(n) ≥ n, Σ_2 TIME(T(n)) ⊆ NTIME(T'(T(n))).

Proof. By definition, if L ∈ Σ_2 TIME(T(n)) then there is some predicate R such that

   x ∈ L ⟺ ∃^{T(|x|)} y_1 ∀^{T(|x|)} y_2 R(x, y_1, y_2)

and R(x, y_1, y_2) is computable in time O(T(|x|)). The inner predicate ∀^{T(|x|)} y_2 R(x, y_1, y_2) is the complement of an NTIME(T(n)) predicate of (x, y_1). By padding, using the assumption that NTIME(n) ⊆ TIME(T'(n)), we obtain NTIME(T(n)) ⊆ TIME(T'(T(n))), and since deterministic time classes are closed under complement, the set

   S = {(x, y_1) | |y_1| ≤ T(|x|) and ∀^{T(|x|)} y_2 R(x, y_1, y_2)}

is in TIME(T'(T(n))). Since x ∈ L if and only if ∃^{T(|x|)} y_1 ((x, y_1) ∈ S), it follows that

   L ∈ NTIME(T(n) + T'(T(n))) = NTIME(T'(T(n))),

as required.

Note that the assumption in Lemma 8.7 can be weakened to NTIME(n) ⊆ coNTIME(T'(n)) and the argument will still go through.

We now obtain a simple version of Part 2 of our basic outline for the proof of Theorem 8.2.

Corollary 8.8. If NTIME(n) ⊆ TIME(n^c) then for t(n) ≥ n², TIMESPACE(t(n), t(n)^{o(1)}) ⊆ NTIME(t(n)^{c/2+o(1)}).

Proof. By Corollary 8.6, TIMESPACE(t(n), t(n)^{o(1)}) ⊆ Σ_2 TIME(t(n)^{1/2+o(1)}). Applying Lemma 8.7 with T(n) = t(n)^{1/2+o(1)} ≥ n yields

   Σ_2 TIME(t(n)^{1/2+o(1)}) ⊆ NTIME(t(n)^{c/2+o(1)}),

as required, since (t(n)^{o(1)})^c is t(n)^{o(1)}.
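To make the exponent bookkeeping explicit, here is a worked polynomial-bound instance of the last two statements (a special case, not an additional result). Suppose NTIME(n) ⊆ TIME(n^c) and take t(n) = n^d for some constant d ≥ 2. Then

   TIMESPACE(n^d, n^{o(1)}) ⊆ Σ_2 TIME(n^{d/2+o(1)})                                  [Corollary 8.6]
                           ⊆ NTIME((n^{d/2+o(1)})^c) = NTIME(n^{cd/2+o(1)})          [Lemma 8.7 with T'(n) = n^c],

which is Corollary 8.8 with the exponent c/2 measured relative to t(n) = n^d.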

Corollary 8.9 (Lipton–Viglas). NTIME(n) ⊄ TIMESPACE(n^c, n^{o(1)}) for c < √2.

Proof. Let t(n) be as in Corollary 8.8 and let T(n) = t(n)^{1/c}. If NTIME(n) ⊆ TIMESPACE(n^c, n^{o(1)}) then

   NTIME(T(n)) ⊆ TIMESPACE(T(n)^c, T(n)^{o(1)}) = TIMESPACE(t(n), t(n)^{o(1)})
              ⊆ NTIME(t(n)^{c/2+o(1)})   [by Corollary 8.8]
              = NTIME(T(n)^{c²/2+o(1)}).

Since c < √2, we have c²/2 < 1, and this contradicts the nondeterministic time hierarchy, Theorem 8.4.

Fortnow and van Melkebeek derived a form slightly different from Lemma 8.5 for the alternating-time simulation of TIMESPACE(T(n), S(n)), using the following idea. For a deterministic Turing machine running for T steps we know that C_0 ⊢^T C_b if and only if for all C'_b ≠ C_b, ¬(C_0 ⊢^T C'_b). Therefore

   (C_0 ⊢^T C_b) ⟺ ∀^{bS} C'_1, C'_2, ..., C'_{b−1}, C'_b ∃^{log b} i. ((C'_b = C_b) ∨ ¬(C'_{i−1} ⊢^{T/b} C'_i)),

where C'_0 denotes C_0. Strictly speaking, this construction would allow one to derive the following lemma.

Lemma 8.10. For S(n) ≥ log n and any integer function b : N → N, TIMESPACE(T(n), S(n)) ⊆ Π_2 TIME(T'(n)) where T'(n) = b(n)·S(n) + T(n)/b(n) + log b(n).

This is no better than Lemma 8.5, but we will not use the idea in this simple form. The key is that we will be able to save because we have expressed C_0 ⊢^T C_b in terms of the negated statements ¬(C'_{i−1} ⊢^{T/b} C'_i). This will allow us to have fewer alternations when the construction is applied recursively, since the existential quantifiers over configurations that occur at the next level (when such a negated statement is expanded) can be combined with the ∃ i quantifier at the current level, as spelled out below. This is the idea behind the proof of Theorem 8.2.
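Spelled out (this one level of unrolling is implicit in the discussion above, under the same conventions on configurations), the saving looks as follows. Expanding a negated statement with the same ∀∃ characterization and negating it gives

   ¬(C'_{i−1} ⊢^{T/b} C'_i) ⟺ ∃^{bS} D_1, ..., D_b ∀^{log b} j. ((D_b ≠ C'_i) ∧ (D_{j−1} ⊢^{T/b²} D_j)),

where D_0 denotes C'_{i−1}. Substituting this into ∃^{log b} i. ((C'_b = C_b) ∨ ¬(C'_{i−1} ⊢^{T/b} C'_i)), the quantifiers ∃ i and ∃ D_1, ..., D_b sit side by side and merge into a single existential block, so each level of recursion costs only one alternation rather than two.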

Proof of Theorem 8.2. We first prove the following claim by induction on k. Let f_k be defined by f_1(c) = c/2 and f_{k+1}(c) = c·f_k(c)/(1 + f_k(c)).

Claim 1. If NTIME(n) ⊆ TIME(n^c) then for every k ≥ 1 and some not too large t(n), TIMESPACE(t(n), t(n)^{o(1)}) ⊆ NTIME(t(n)^{f_k(c)+o(1)}).

Proof of Claim. The base case k = 1 is precisely Corollary 8.8. Suppose that the claim is true for k, and suppose that we have a machine M witnessing that a language L is in TIMESPACE(t(n), t(n)^{o(1)}). We apply the expansion

   (C_0 ⊢^t C_b) ⟺ ∀^{bS} C'_1, C'_2, ..., C'_{b−1}, C'_b ∃^{log b} i. ((C'_b = C_b) ∨ ¬(C'_{i−1} ⊢^{t/b} C'_i)).

Choose b(n) = t(n)^{f_k(c)/(1+f_k(c))}. Then t(n)/b(n) = t(n)^{1/(f_k(c)+1)} and b(n)·S(n) = t(n)^{f_k(c)/(1+f_k(c))+o(1)}. Since f_k(c) ≤ f_1(c) = c/2 < 1, we have b(n)·S(n) ≤ t(n)/b(n), so the computation time for the expansion is dominated by t(n)/b(n) = t(n)^{1/(f_k(c)+1)}. By the inductive hypothesis applied to the inner statement C'_{i−1} ⊢^{t/b} C'_i, we obtain that for some not too large t(n)/b(n) this computation can be done in NTIME([t(n)/b(n)]^{f_k(c)+o(1)}) = NTIME(t(n)^{f_k(c)/(f_k(c)+1)+o(1)}). Adding the ∃^{log b} i quantifier also keeps it in the same complexity class. By padding and the hypothesis that NTIME(n) ⊆ TIME(n^c), we obtain that the inner computation

   ∃^{log b} i. ((C'_b = C_b) ∨ ¬(C'_{i−1} ⊢^{t/b} C'_i))

can be done in TIME(t(n)^{c·f_k(c)/(f_k(c)+1)+o(1)}); note that once this check is deterministic, the negation costs nothing. Plugging this in under the outer ∀^{bS} quantifier, we obtain that TIMESPACE(t(n), t(n)^{o(1)}) ⊆ coNTIME(t(n)^{c·f_k(c)/(f_k(c)+1)+o(1)}). Since TIMESPACE(t(n), t(n)^{o(1)}) is closed under complement, and by the definition of f_{k+1}(c), we obtain that TIMESPACE(t(n), t(n)^{o(1)}) ⊆ NTIME(t(n)^{f_{k+1}(c)+o(1)}), as required.

Applying the end of the proof outline, we obtain that for any k, if NTIME(n) ⊆ TIMESPACE(n^c, n^{o(1)}) then for T(n) = t(n)^{1/c},

   NTIME(T(n)) ⊆ TIMESPACE(T(n)^c, T(n)^{o(1)}) = TIMESPACE(t(n), t(n)^{o(1)})
              ⊆ NTIME(t(n)^{f_k(c)+o(1)}) = NTIME(T(n)^{c·f_k(c)+o(1)}).

Observe that f_k(c) is monotonically decreasing in k, with fixed point f_∞(c) satisfying f_∞(c) = c·f_∞(c)/(f_∞(c) + 1), i.e., f_∞(c) = c − 1. Then c·f_∞(c) = c(c − 1) < 1 when c < φ = (√5 + 1)/2, so for such c and sufficiently large k we have c·f_k(c) < 1, which provides a contradiction to the nondeterministic time hierarchy theorem.

Open Problem 8.1. Prove that NTIME(n) ⊄ TIMESPACE(n^c, n^{o(1)}) for some c > (√5 + 1)/2, for example c = 2.
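How many levels of the recursion are needed for a given c is easy to check numerically. The following small Python sketch (an illustration added here, not part of the original notes) iterates the recursion f_1(c) = c/2, f_{k+1}(c) = c·f_k(c)/(1 + f_k(c)) and reports the first k at which c·f_k(c) < 1, the point where Theorem 8.4 yields the contradiction.

    from math import sqrt

    def levels_needed(c, max_k=50):
        """Smallest k with c * f_k(c) < 1, or None if no k <= max_k works."""
        f = c / 2.0                     # f_1(c)
        for k in range(1, max_k + 1):
            if c * f < 1:
                return k
            f = c * f / (1 + f)         # f_{k+1}(c) = c * f_k(c) / (1 + f_k(c))
        return None

    phi = (sqrt(5) + 1) / 2             # the golden ratio, about 1.618
    print(f"threshold: phi = {phi:.4f}")
    for c in [1.3, 1.5, 1.6, 1.63]:
        print(f"c = {c:.2f}: first k with c*f_k(c) < 1 is {levels_needed(c)}")

For example, c = 1.3 < √2 is already handled at k = 1 (Corollary 8.9), c = 1.5 needs k = 2, while c = 1.63 > φ never satisfies the condition, since f_k(c) decreases to the fixed point c − 1 and c(c − 1) > 1 there.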