CSE200: Computability and Complexity
Space Complexity


Shachar Lovett
January 29, 2018

1 Space complexity

We would like to discuss languages that can be decided in sub-linear space. Let us first recall the definitions of space complexity.

Definition 1.1 (Space complexity for languages). Let S : N → N be a (computable) function. A language L ⊆ {0,1}* is in space S if there exists a TM M which decides L, where the TM has two tapes:

1. A read-only input tape.
2. A read-write work tape.

We require that on any input x, running M(x) uses at most O(S(|x|)) work-tape cells. The class SPACE(S) is the class of all such languages.

Similarly to nondeterministic time, we can also define nondeterministic space. The definition is the same, except that we allow M to be an NDTM. We denote by NSPACE(S) the class of all such languages.

We also define space complexity for non-boolean functions. Here, as the output is not a single bit, we have to be a bit careful.

Definition 1.2 (Space complexity for functions). A function f : {0,1}* → {0,1}* is in (implicit) SPACE(S) if:

(i) |f(x)| ≤ 2^{O(S(|x|))};
(ii) the language {(x,i) : f(x)_i = 1} is in SPACE(S).

Together these imply that we can compute any specific bit of f(x) in space O(S(|x|)). In particular, specifying the coordinate i requires at most space O(S(|x|)). The nondeterministic variant is defined in the obvious way, so we do not repeat it.

Here are some particular classes of interest:
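Before listing them, here is a quick illustration of Definition 1.2, sketched in Python rather than as a formal TM. It computes a single bit of the sum of two binary numbers without ever writing down the full sum; only a carry flag and a loop index are kept, i.e. O(log n) bits of state. The function name and input convention are our own choices, not from the notes.

```python
def add_bit(x: str, y: str, i: int) -> int:
    """Return bit i (bit 0 = least significant) of the integer sum
    of binary strings x and y, without materializing the sum.
    The only state kept is a carry flag and the loop index."""
    n = max(len(x), len(y))
    # reverse so position j is bit j; pad with zeros
    xs = x[::-1].ljust(n, "0")
    ys = y[::-1].ljust(n, "0")
    carry = 0
    for j in range(i + 1):
        a = int(xs[j]) if j < n else 0
        b = int(ys[j]) if j < n else 0
        s = a + b + carry
        carry = s // 2
        if j == i:
            return s % 2
    return carry % 2
```

This is the pattern behind Claim 1.5 below: membership in SUMEQUAL can be checked by comparing the bits of x + y against z one position at a time.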

PSPACE = ∪_{c≥1} SPACE(n^c)
NPSPACE = ∪_{c≥1} NSPACE(n^c)
L = SPACE(log n)
NL = NSPACE(log n)

We typically only consider space bounds S(n) ≥ log n. The reason is that most algorithms require at least that much space, for example to store a pointer into the input. Surprisingly, many problems can be solved in sub-linear space. For example, summing two n-bit integers can be done in space O(log n). On the other hand, many efficient graph algorithms (such as BFS or DFS) use space linear in the number of vertices. Here are some more examples:

Claim 1.3. SAT ∈ PSPACE.

Proof. We can enumerate the possible assignments and try each one, reusing the same space for each assignment.

We will later show that in fact PH ⊆ PSPACE.

Claim 1.4. Let BALANCED = {x ∈ {0,1}* : x has the same number of 0's and 1's}. Then BALANCED ∈ L.

Proof. Go over the input and count the number of 0's and 1's, then compare the two counts.

The next claim is left as an exercise.

Claim 1.5. Let SUMEQUAL = {(x,y,z) ∈ {0,1}* : x + y = z}, where we view x, y, z as integers written in binary. Then SUMEQUAL ∈ L.

2 Complete languages

Log-space reductions are space-efficient reductions. They are useful when studying completeness for space-bounded computation.

Definition 2.1 (Log-space reductions). Let A, B be two languages. We say that A reduces to B under log-space reductions, denoted A ≤_L B, if there exists a log-space computable function f : {0,1}* → {0,1}* such that x ∈ A iff f(x) ∈ B.

Claim 2.2. Let A, B, C be languages, and let S(n) = Ω(log n) be a space bound.

1. If A ≤_L B and B ≤_L C then A ≤_L C.
2. If A ≤_L B and B ∈ SPACE(S) then A ∈ SPACE(S).

Proof. We only prove the first item; the second is similar. Let x be an input for A. Let f : {0,1}* → {0,1}* be the log-space computable reduction from A to B, and let g : {0,1}* → {0,1}* be the log-space computable reduction from B to C. We need to show that g(f(x)) is also computable in log-space. To recall the definitions, we have three languages:

A' = {(x,i) : f(x)_i = 1},  B' = {(x,i) : g(x)_i = 1},  C' = {(x,i) : g(f(x))_i = 1}.

We know that A', B' are in log-space, and we want to prove that C' is in log-space. Let M_1 be a TM deciding A' and M_2 a TM deciding B'. Consider the following TM M_3, which will decide C'. Let x be the input and let y = f(x). We run M_2 on (y,i), where in addition we keep a counter for the input head location of M_2. If the input head is at coordinate j then M_2 needs to read y_j. We compute this bit on-the-fly by running M_1 on (x,j). The total space requirements are: O(log n) for simulating M_2 on y; O(log n) for remembering the input head location; and O(log n) for simulating M_1 on x. So the total space used is O(log n).

Definition 2.3 (Complete languages). Let C be a class of computable languages. A language A is C-hard under log-space reductions if for any B ∈ C we have B ≤_L A. If in addition A ∈ C, then we say A is C-complete under log-space reductions. When C is a space class (like PSPACE or NL) we abbreviate and say that A is simply C-complete, leaving the log-space reductions implicit.

We next proceed to give complete languages for various space classes. Note that any nontrivial language in L is (trivially) also L-complete, so to make the notion interesting we should consider richer classes.

3 An NL-complete language

A configuration of a deterministic Turing machine using space S(n) can be described with O(S(n)) bits (tape contents, head locations, and state). Hence, we can represent the computation of the Turing machine by a graph. The nodes of this graph are all the possible configurations, so it has 2^{O(S(n))} many vertices.
The transition function maps each configuration to the next one, hence this is a directed graph with out-degree at most 1. The following claim is a warm-up.

Claim 3.1. The following language is in L:

CONN_DEG1 = {(G,s,t) : G a directed graph with out-degree at most 1 that has a path from s to t}.

Proof. Assume G is given as an adjacency matrix. In log-space we can scan the input and, for a vertex v, find whether it has an outgoing edge to some vertex u or not. Let v = s be the start vertex. Iteratively move v to the next vertex until either: v = t (accept); v has no outgoing edge (reject); or we have made more than n steps, in which case we ran into a loop (reject).
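The algorithm of Claim 3.1 can be sketched as follows. For readability the graph is given as a successor list rather than an adjacency matrix (our encoding choice); the only state kept is the current vertex and a step counter, which is O(log n) bits.

```python
def conn_deg1(succ, s, t):
    """Decide CONN_DEG1 for a graph of out-degree at most 1.
    succ[v] is the unique successor of v, or None if v has no
    outgoing edge.  Only the current vertex and a step counter
    are stored, as in Claim 3.1."""
    n = len(succ)
    v, steps = s, 0
    while steps <= n:
        if v == t:
            return True           # reached t
        if succ[v] is None:
            return False          # dead end before reaching t
        v = succ[v]
        steps += 1
    return False                  # more than n steps: stuck in a cycle avoiding t
```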

Theorem 3.2. The following language is NL-complete:

CONN = {(G,s,t) : G a directed graph with a path from s to t}.

Proof. Let us first show that CONN ∈ NL. The witness is a description of a path from s to t, namely a sequence of nodes p = (v_0, v_1, ..., v_m). The verifier, on input (G, s, t, p), does the following:

1. Verify that v_0 = s and v_m = t.
2. For i = 0,...,m-1, verify that the edge (v_i, v_{i+1}) is in G.

Note that the verifier can be implemented in log-space, as each verification step can be done in O(log n) space, as can the counter over i.

Next, we show that CONN is NL-hard. Fix a language A ∈ NL. Let x be an input for which we want to check whether x ∈ A. We will build a function f(x) = (G,s,t), computable in log-space, such that x ∈ A iff f(x) ∈ CONN. Recall that by definition, A ∈ NL means that there exists an NDTM M running in space O(log |x|) that decides A. We may assume that when M accepts an input, its head is in the start position and the work tape is empty, so that there is just one accepting configuration. Define a graph G = G_x, the configuration graph of M on x:

Vertices: The vertices correspond to configurations of M. Each one describes the tape head locations, the contents of the work tape, and the state. These can be described using O(log |x|) bits.

Edges: Given two nodes u, v, there is a directed edge u → v if the NDTM M can reach v from u using one of its two transition functions. Note that G has out-degree at most 2.

Let s be the node corresponding to the initial configuration and t the accepting configuration. A path in G from s to t corresponds to a computation path of M that accepts x. Moreover, we can compute the vertices (or specific bits of them) and the edges (i.e. the adjacency matrix) in log-space, by simulating one step of M. So the reduction is computable in log-space.

The connectivity question makes sense also for undirected graphs. Surprisingly, undirected connectivity can be solved deterministically in log-space!

Theorem 3.3 (Reingold '05). The following language is in L:

UCONN = {(G,s,t) : G an undirected graph with a path from s to t}.
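As a sketch, the read-once verifier from the first half of the proof of Theorem 3.2 looks as follows (names and graph encoding are our own). It scans the claimed path once, left to right, remembering only the previous vertex and the endpoints.

```python
def verify_path(adj, s, t, path):
    """Log-space-style verifier for CONN: check that `path` is a
    valid walk from s to t in the graph given by adjacency matrix
    `adj`.  The scan is a single left-to-right pass over the
    witness, keeping only the previous vertex."""
    if not path or path[0] != s or path[-1] != t:
        return False
    prev = path[0]
    for v in path[1:]:
        if not adj[prev][v]:      # claimed edge is missing
            return False
        prev = v
    return True
```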

4 A PSPACE-complete language

A quantified boolean formula (QBF) is a formula of the form

ψ = Q_1 x_1 Q_2 x_2 ... Q_n x_n φ(x_1,...,x_n),

where Q_i ∈ {∃, ∀} and φ is an unquantified boolean formula. This generalizes SAT and TAUTOLOGY, which correspond to using only ∃ quantifiers or only ∀ quantifiers, respectively. Define the language

TQBF = {true QBF formulas}.

Theorem 4.1. TQBF is PSPACE-complete under poly-time reductions (and even under log-space reductions).

Proof. We first show that TQBF ∈ PSPACE. Given a QBF ψ = Q_1 x_1 ... Q_n x_n φ(x_1,...,x_n), define

f_i(ψ; a_1,...,a_i) = Q_{i+1} x_{i+1} ... Q_n x_n φ(a_1,...,a_i, x_{i+1},...,x_n).

That is, f_i fixes a partial assignment to the first i inputs. We want to compute ψ = f_0(ψ), and we will do so inductively. If Q_{i+1} = ∃ then

f_i(ψ; a_1,...,a_i) = f_{i+1}(ψ; a_1,...,a_i, 0) ∨ f_{i+1}(ψ; a_1,...,a_i, 1),

and if Q_{i+1} = ∀ then

f_i(ψ; a_1,...,a_i) = f_{i+1}(ψ; a_1,...,a_i, 0) ∧ f_{i+1}(ψ; a_1,...,a_i, 1).

Assume we have a Turing machine M_{i+1} computing f_{i+1} in space S_{i+1}. Then M_i runs M_{i+1} twice (reusing the same memory), plus O(1) overhead memory for bookkeeping, so S_i = S_{i+1} + O(1). Clearly f_n, which just evaluates φ on a full assignment, is computable in P and in particular in PSPACE. Hence f_0 is computable in PSPACE.

We next argue that TQBF is PSPACE-hard. Let L ∈ PSPACE, and let G be the configuration graph of a machine deciding L, where we may think of the input as part of the configuration if we wish. Each vertex of G is represented by m = poly(n) bits. Define

ψ_i(u,v) = "there is a path from u to v of length at most 2^i".

Let s, t be the vertices representing the start and accepting configurations. We would like to compute ψ_m(s,t). A simple way to do so is by using

ψ_{i+1}(u,v) = ∃w, ψ_i(u,w) ∧ ψ_i(w,v).

However, since ψ_i appears twice, this produces a formula of exponential size. To get a formula of polynomial size, we use the trick

ψ_{i+1}(u,v) = ∃w ∀a,b (((a = u ∧ b = w) ∨ (a = w ∧ b = v)) ⇒ ψ_i(a,b)).

Clearly we can transform ψ_i into ψ_{i+1} in polynomial time. With some care, we can make this into a CNF. With some more care, this entire operation can actually be done in log-space.
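The inductive computation of the f_i in the first half of the proof can be sketched as follows. The recursion depth is n and each level stores O(1) extra data, matching the space analysis; the interface (quantifiers as a string over 'E'/'A', φ as a Python predicate) is our own simplification.

```python
def eval_qbf(quants, phi, assignment=()):
    """Evaluate Q_1 x_1 ... Q_n x_n . phi(x_1,...,x_n), following
    the recursion for f_i in Theorem 4.1.  `quants` is a string
    over 'E' (exists) and 'A' (forall); `phi` maps a full 0/1
    assignment (a tuple) to a bool."""
    i = len(assignment)
    if i == len(quants):              # f_n: evaluate phi directly
        return phi(assignment)
    # the two recursive calls run sequentially and reuse space
    b0 = eval_qbf(quants, phi, assignment + (0,))
    b1 = eval_qbf(quants, phi, assignment + (1,))
    # f_i is the OR of the branches for 'E' and the AND for 'A'
    return (b0 or b1) if quants[i] == "E" else (b0 and b1)

# forall x exists y: x != y  -- a true QBF
print(eval_qbf("AE", lambda a: a[0] != a[1]))  # prints True
```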

5 Nondeterministic vs. deterministic space

It is conjectured that NL = L. The following theorems provide some partial answers.

Theorem 5.1 (Savitch '70). NSPACE(S(n)) ⊆ SPACE(S(n)^2) for S(n) ≥ log n. In particular, NPSPACE = PSPACE and NL ⊆ L^2 := SPACE(log^2 n).

Proof. This is a stripped-down version of the proof that TQBF ∈ PSPACE. Let L ∈ NSPACE(S). Let x ∈ {0,1}^n be an input for which we want to decide whether x ∈ L. Let G be the configuration graph of a nondeterministic Turing machine deciding L, on input x. Membership in L is equivalent to checking whether there is a directed path in G between the vertices s, t. Define

reach_i(u,v) = "there is a path from u to v of length at most 2^i".

Then

reach_{i+1}(u,v) = ∃w, reach_i(u,w) ∧ reach_i(w,v).

In order to compute reach_{i+1} we run reach_i twice, but since the two calls run sequentially, both can use the same space. We need additional space O(S(n)) at each level to enumerate the choices of w. Since |G| ≤ 2^{O(S(n))}, there is a path between s and t iff reach_{O(S(n))}(s,t) = 1. The recursion has depth O(S(n)) and each level uses O(S(n)) space, so the computation takes O(S^2(n)) space in total.

Let

coCONN = {(G,s,t) : there is no directed path in G from s to t}.

We can define coNL (languages whose complements are in NL, i.e. languages which can be refuted in log-space) and get that coCONN is coNL-complete. It was a surprise when it was proven in the late 80s that coNL = NL.

Theorem 5.2 (Immerman '88, Szelepcsényi '87). coCONN ∈ NL. Hence NL = coNL.

Proof. Recall that by definition, a language L is in nondeterministic log-space if there exists an NDTM which decides L and runs in log-space. For the proof it will be easier to consider an equivalent model: L ∈ NL if there exist a TM M and c > 0 such that

(i) x ∈ L iff there exists y with |y| ≤ |x|^c such that M(x,y) = 1;
(ii) M(x,y) uses O(log |x|) space;
(iii) y is given on a separate tape (the witness tape) on which the head can only move to the right. That is, y is read one symbol at a time, and the head can never return to previously read symbols.

The equivalence is proven essentially as follows: y describes the sequence of choices of which transition function of the NDTM to take.
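As an aside before continuing, the reach recursion from the proof of Theorem 5.1 can be sketched directly. This version takes an explicit adjacency matrix rather than an implicitly represented configuration graph (our simplification); the point is that the two recursive calls run sequentially and each level stores only one midpoint w.

```python
def reach(adj, u, v, i):
    """Savitch's recursion: is there a path from u to v of length
    at most 2**i in the graph given by adjacency matrix `adj`?
    Recursion depth is i, and each level stores one vertex w,
    giving the O(S(n)^2) space bound when i = O(S(n))."""
    if i == 0:
        # base case: a path of length at most 1
        return u == v or bool(adj[u][v])
    # both calls reuse the same space, as they run one after the other
    return any(reach(adj, u, w, i - 1) and reach(adj, w, v, i - 1)
               for w in range(len(adj)))
```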
Let G be a directed graph and s, t vertices in G. Define

C_i = {v : v is reachable from s in at most i steps}.

We already know that there is a witness that v ∈ C_i which can be verified in log-space: a list of vertices v_0,...,v_j with j ≤ i such that v_0 = s, v_j = v, and each two consecutive nodes on the list are adjacent. Note that this witness can indeed be verified in log-space while being read from left to right. The length of this witness is O(n log n). We will show that there exist more complicated witnesses, still verifiable in space O(log n) and in a read-once left-to-right fashion, for:

(a) The fact that v ∉ C_i, given that we know |C_i|.
(b) The value of |C_i|, given that we know |C_{i-1}|.

(a) A witness that v ∉ C_i, given that we know |C_i|, is the following: for each u ∈ C_i, in increasing order, we give a witness that u ∈ C_i. The verifier then verifies that: the nodes u appear in increasing order; the witness for each u is correct; the number of nodes u encountered equals |C_i|; and the node v was not one of the nodes u given. Observe that the verifier needs only log-space and reads the witness from left to right. The length of the witness is O(|C_i| · n log n) = O(n^2 log n).

(b) A witness for |C_i|, given |C_{i-1}|, is the following: for each vertex v of G in order,

1. Whether v ∈ C_i or not.
2. If v ∈ C_i: a node w such that (w,v) is an edge of G (or w = v, which covers the case v ∈ C_{i-1}), together with a witness that w ∈ C_{i-1}.
3. If v ∉ C_i: for every node w such that (w,v) is an edge of G, and for w = v itself, a witness that w ∉ C_{i-1}, using part (a) and our knowledge of |C_{i-1}|.

The verifier counts the vertices claimed to be in C_i; this count is the certified value of |C_i|. Observe again that the verifier needs only log-space and reads the witness from left to right. The length of the witness is O(n^3 log n).

The final witness is a concatenation: first a (trivial) witness that |C_0| = 1; then a witness for |C_1| given that we know |C_0|; then a witness for |C_2| given that we know |C_1|; and so on. At the end, we append a witness that t ∉ C_n, given that we now know |C_n|.
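The inductive counting that drives this proof can be sketched as follows. This plain version stores the sets C_i explicitly for clarity; the whole point of the theorem is that a nondeterministic log-space verifier can certify the same counts |C_0|, |C_1|, ..., |C_n| without ever storing a set.

```python
def count_reachable(adj, s):
    """Inductive counting at the heart of Immerman-Szelepcsenyi:
    compute C_i from C_{i-1} for i = 1..n, where adj is an
    adjacency matrix and C_i is the set of vertices reachable
    from s in at most i steps.  Returns |C_n|, the number of
    vertices reachable from s."""
    n = len(adj)
    C = {s}                                  # C_0 = {s}
    for _ in range(n):                       # compute C_i from C_{i-1}
        C = C | {v for v in range(n)
                 if any(adj[w][v] for w in C)}
    return len(C)
```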