CMPSCI 601: Recall From Last Time Lecture 3

Definition: Alternating time and space.

Game Semantics: The state of the machine determines who controls it: White wants the machine to accept, Black wants it to reject. The machine accepts input w iff White wins the game on w.

Examples:
1. MCVP is in ASPACE[log n].
2. QSAT is in ATIME[n].

Theorem: For s(n) ≥ log n,
NSPACE[s(n)] ⊆ ATIME[s(n)²] ⊆ DSPACE[s(n)²], and ASPACE[s(n)] = DTIME[2^O(s(n))].

Corollary: ASPACE[log n] = P, ATIME[n^O(1)] = PSPACE, ASPACE[n^O(1)] = EXPTIME.
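To make the game semantics concrete, here is a minimal Python sketch (my own illustration, not from the notes) of how an alternating machine evaluates a quantified boolean formula: at an existential quantifier White needs one winning move, at a universal quantifier every move must win for White. The encoding (a quantifier prefix plus a matrix function) is a hypothetical choice made for this example.

    def eval_qbf(prefix, matrix, assignment=None):
        """Evaluate a QBF given as a quantifier prefix and a matrix.

        prefix: list of (quantifier, variable) pairs, e.g. [('E', 'x'), ('A', 'y')]
        matrix: function mapping a dict of variable -> bool to bool
        White (the existential player) wins iff the formula is true.
        """
        if assignment is None:
            assignment = {}
        if not prefix:
            return matrix(assignment)
        quant, var = prefix[0]
        rest = prefix[1:]
        branches = (eval_qbf(rest, matrix, {**assignment, var: b}) for b in (False, True))
        if quant == 'E':      # existential state: White needs one move that wins
            return any(branches)
        else:                 # universal state: Black moves, so every branch must win
            return all(branches)

    # Exists x Forall y: (x OR y) AND (x OR NOT y)  -- true, take x = True
    print(eval_qbf([('E', 'x'), ('A', 'y')],
                   lambda a: (a['x'] or a['y']) and (a['x'] or not a['y'])))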

CMPSCI 601: Parallel Computation Lecture 3

The Turing machine and the abstract RAM are sequential machines, in that they perform only one operation at a time. Real computers are largely sequential as well, but:

modern computer networks allow us to apply many processors to the same problem (e.g., SETI@home),
modern programming languages allow for parallel execution threads,
modern processors are slightly parallel, with the capacity to do a few things at the same time,
there have been some experimental massively parallel computers such as the Connection Machine, and
the circuit elements inside a given chip operate in parallel.

Can we solve any problem a million times faster by applying a million parallel processors to it? Probably not, but as with the P vs. NP question we don't have any theorems confirming our intuition. Parallel complexity theory studies the resources needed to solve problems in parallel. To begin such a study we need a formal model of parallel computation, analogous to the Turing machine or RAM. As it turns out, just as the TM and RAM have similar behavior with respect to time and space, various different parallel models have similar behavior with respect to parallel time and amount of hardware.

Parallel Random Access Machines

CRAM[t(n)] = CRCW-PRAM-TIME[t(n)]-HARD[n^O(1)]: synchronous, with concurrent read and write, uniform, and with n^O(1) processors and memory.

[Figure: processors P1, ..., Pr all connected to a shared global memory.]

priority write: the lowest-numbered processor wins each write conflict
common write: a write conflict crashes the machine
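As an illustration (not part of the original notes), here is a small Python sketch of one synchronous write step of a CRCW PRAM under the two conflict-resolution conventions just mentioned; the data structures are hypothetical.

    def crcw_write_step(memory, requests, mode="priority"):
        """Apply one synchronous write step of a CRCW PRAM.

        memory:   list of cells (the global memory)
        requests: list of (processor_id, address, value) issued in this step
        mode:     'priority' -- lowest-numbered processor wins each conflict
                  'common'   -- conflicting writes must agree, else the machine crashes
        """
        by_addr = {}
        for pid, addr, val in requests:
            by_addr.setdefault(addr, []).append((pid, val))
        for addr, writes in by_addr.items():
            if mode == "priority":
                _, val = min(writes)          # lowest processor id wins the conflict
                memory[addr] = val
            else:                             # common write
                vals = {val for _, val in writes}
                if len(vals) > 1:
                    raise RuntimeError("write conflict: machine crashes")
                memory[addr] = vals.pop()
        return memory

    mem = [0] * 4
    # P3 and P0 both write cell 1; under priority write P0 wins, so cell 1 becomes 9.
    print(crcw_write_step(mem, [(3, 1, 7), (0, 1, 9), (2, 2, 5)]))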

The alternating Turing machine is another parallel model of sorts, since the acceptance behavior depends on the entire set of configurations. The parallel time measure turns out to be the number of alternations between existential and universal states:

Theorem 3.1 For t(n) ≤ n^O(1), CRAM[t(n)] is equal to the class of languages accepted by ATMs with space O(log n) and O(t(n)) alternations.

We won't prove this here (I might put a piece of the proof on HW#8), but we'll show that ATMs are closely related to another parallel computing model, that of boolean circuits. First, however, a digression:

CS601/CM730-A: LH and PH Lecture 3

An important special case of ATM computation is when the number of alternations is bounded by a constant. We use the same names for constant-alternation classes that we defined for the Arithmetic Hierarchy in HW#5. For example, Σ₁P consists of the languages of poly-time ATMs that always stay in existential states, that is, NP. Π₁P is the same for only universal states, that is, co-NP. Σ₂P consists of the languages of poly-time ATMs that have a phase of existential configurations followed by a phase of universals. Π₂P consists of the complements of Σ₂P languages, and so on.

PH is defined to be the union of ΣₖP and ΠₖP over all constants k, or equivalently the languages of poly-time ATMs with O(1) alternations.

Theorem 3.2 PH = SO.

The proof is a simple generalization of Fagin's Theorem, NP = SO∃.
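For reference, the constant-alternation classes can also be written in quantifier form; this is the standard characterization, added here for concreteness rather than taken from the slide. With p a polynomial and R a polynomial-time predicate:

    % Sigma_2^p in quantifier form; Sigma_1^p = NP and Pi_1^p = co-NP are the k = 1 cases.
    x \in L \iff (\exists y,\ |y| \le p(|x|))\,(\forall z,\ |z| \le p(|x|))\ R(x, y, z)

Adding more alternating quantifier blocks gives ΣₖP and ΠₖP, and PH is their union over all k.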

The Logtime Hierarchy

On Spring 2003's HW#7 we looked at ATMs that operate in O(log n) time, making key use of their random-access input tape. We proved then that such an ATM can decide any language in FO, and also the PARITY language of strings with an odd number of 1's. PARITY is not in FO, though we won't be able to prove such a lower bound in this course.

The complexity class LH, the log-time hierarchy, is the set of languages decidable in ATIME[O(log n)] with O(1) alternations. A careful solution shows that FO ⊆ LH. It turns out that with the right definition of FO, FO = LH. On this term's HW#7 we look at the similar logspace hierarchy LSH, which you'll prove collapses to (is equal to) NL.

CMPSCI 601: Circuit Complexity Lecture 3

Real computers are built from many copies of small and simple components. Circuit complexity uses circuits of boolean logic gates as its model of computation. Circuits are directed acyclic graphs. Inputs are placed at the leaves. Signals proceed up toward the root r. The code is straight-line in that gates are not reused (because the graph is acyclic). In fact the boolean straight-line programs of Lecture 1 and HW#1 were simply a reformulation of boolean circuits.

Circuit Families and Languages: Let A ⊆ {0,1}* be a language (or decision problem). Let C = C₁, C₂, C₃, ... be a circuit family. Each circuit Cₙ has n input bits and one output bit.

Definition: C computes A iff for all n and for all x ∈ {0,1}ⁿ, Cₙ(x) = 1 ⟺ x ∈ A.

[Figure: a circuit of alternating OR and AND gates with root r and depth t(n), whose leaves are the input literals b₁, ¬b₁, ..., bₙ, ¬bₙ.]

It turns out that NOT gates in the middle of the circuit can be pushed down to the bottom and eliminated without changing the parameters of the circuit, so we will keep circuits in this form.

Size and Depth: The depth of the circuit is the length of the longest path from an input to an output. (Compare the depth of an SLP in HW#1.) The depth measures the parallel time, the total delay for the circuit to compute its value, in terms of the individual gate delays.

The size of the circuit is the number of gates, counting the input gates and their negations. This is the length of the corresponding SLP. It represents the computational work used to solve the problem, and corresponds roughly to the sequential time needed to evaluate the circuit.

These size and depth parameters become functions of the input size once we consider a circuit family instead of a single circuit. They become our cost measures, and we use them to define complexity classes.
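For concreteness, here is a small Python sketch (my own illustration, using a hypothetical gate-list encoding) that evaluates a circuit gate by gate and computes its size and depth. The number of gates visited tracks the sequential-time reading of size, and the longest input-to-output path is the depth.

    # A circuit is a list of gates in topological order.  Each gate is a tuple
    # (kind, inputs) with kind in {'INPUT', 'NOT', 'AND', 'OR'}; inputs are indices
    # of earlier gates (or the bit position, for 'INPUT').  The last gate is the output.
    def evaluate(circuit, x):
        """Evaluate the circuit on input string x, one gate at a time."""
        val = []
        for kind, inputs in circuit:
            if kind == 'INPUT':
                val.append(x[inputs[0]] == '1')
            elif kind == 'NOT':
                val.append(not val[inputs[0]])
            elif kind == 'AND':
                val.append(all(val[i] for i in inputs))
            else:  # 'OR'
                val.append(any(val[i] for i in inputs))
        return val[-1]

    def size_and_depth(circuit):
        """Size = number of gates; depth = longest path from an input to the output."""
        depth = []
        for kind, inputs in circuit:
            depth.append(0 if kind == 'INPUT' else 1 + max(depth[i] for i in inputs))
        return len(circuit), max(depth)

    # (x0 AND x1) OR (NOT x2)
    C = [('INPUT', (0,)), ('INPUT', (1,)), ('INPUT', (2,)),
         ('AND', (0, 1)), ('NOT', (2,)), ('OR', (3, 4))]
    print(evaluate(C, '110'), size_and_depth(C))   # True (6, 2)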

CMPSCI 601: Circuit Uniformity Lecture 3

Consider the class PSIZE of languages A that are computed by a family of poly-size circuits. That is, for each n there is a circuit Cₙ of size n^O(1) that accepts an input string x of length n iff x ∈ A. It is easy to see from our construction for Fagin's Theorem that P ⊆ PSIZE. Also, since CVP is in P, it seems that PSIZE should be no more powerful than P.

But as we've defined PSIZE, it contains undecidable languages! Look, for example, at a language such as {x : |x| ∈ K}, where K is the halting problem. For any input length n, there is a one-gate circuit that decides whether the input is in the language: it either says yes or says no without looking at the input at all. So this language is in PSIZE, but it's clearly r.e.-complete as it's just a recoding of K.

But this circuit family is non-uniform. Given a number n, it is impossible for any Turing machine, much less a poly-time TM, to determine what Cₙ is. Let's define P-uniform PSIZE to be those languages decided by poly-size circuit families where we can compute Cₙ from the string 1ⁿ in n^O(1) time. Now it's easy to see that P-uniform PSIZE is contained in P, because our machine on input x can first build the circuit C_|x| and then solve the CVP problem that tells what C_|x| does on input x. And in fact the tableau construction tells us that P is contained in P-uniform PSIZE, because the circuit from the tableau is easy to construct. In fact it's very easy to construct: we could do it in L or even in FO. Thus we have:

Theorem: P = P-uniform PSIZE = L-uniform PSIZE = FO-uniform PSIZE.
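The containment P-uniform PSIZE ⊆ P is just the composition of the two poly-time steps described above. As a sketch (assuming a hypothetical poly-time build_circuit(n) and the evaluate function from the earlier sketch):

    def decide(x, build_circuit, evaluate):
        """Membership test for a P-uniform PSIZE language: first build C_{|x|}
        in polynomial time, then solve the CVP instance (C_{|x|}, x)."""
        circuit = build_circuit(len(x))   # the poly-time uniformity condition
        return evaluate(circuit, x)       # gate-by-gate evaluation, poly-time in |circuit|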

Definition 3.3 (The NC Hierarchy) Let t(n) be a polynomially bounded function and let A ⊆ {0,1}*. Then A is in the circuit complexity class NC[t(n)], AC[t(n)], or ThC[t(n)], respectively, iff there exists a uniform family of circuits C₁, C₂, ... with the following properties:

1. For all n and all x ∈ {0,1}ⁿ, Cₙ(x) = 1 ⟺ x ∈ A.
2. The depth of Cₙ is O(t(n)).
3. |Cₙ| = n^O(1).
4. The gates of Cₙ consist of:
   NC: bounded fan-in and, or gates
   AC: unbounded fan-in and, or gates
   ThC: unbounded fan-in threshold gates

For i ∈ N, let NCⁱ = NC[(log n)ⁱ], ACⁱ = AC[(log n)ⁱ], and ThCⁱ = ThC[(log n)ⁱ], and let NC = ∪ᵢ NCⁱ = ∪ᵢ ACⁱ = ∪ᵢ ThCⁱ.

We will see that the following inclusions hold:

AC⁰ ⊆ ThC⁰ ⊆ NC¹ ⊆ L ⊆ NL ⊆ AC¹
AC¹ ⊆ ThC¹ ⊆ NC² ⊆ AC²
AC² ⊆ ThC² ⊆ NC³ ⊆ AC³
...
ACⁱ ⊆ ThCⁱ ⊆ NCⁱ⁺¹ ⊆ ACⁱ⁺¹ ⊆ ... ⊆ NC
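To make the fan-in distinction concrete, here is an illustrative Python sketch (not from the notes) that builds the OR of n input bits two ways, in the gate-list encoding from the earlier sketch: a single unbounded fan-in OR gate of depth 1, as an AC⁰ circuit would use, versus a balanced tree of fan-in-2 OR gates of depth about log₂ n, as an NC¹ circuit would use.

    def or_circuit_ac0(n):
        """One unbounded fan-in OR gate over all n inputs: depth 1."""
        gates = [('INPUT', (i,)) for i in range(n)]
        gates.append(('OR', tuple(range(n))))
        return gates

    def or_circuit_nc1(n):
        """Balanced tree of fan-in-2 OR gates: depth about log2(n)."""
        gates = [('INPUT', (i,)) for i in range(n)]
        layer = list(range(n))
        while len(layer) > 1:
            nxt = []
            for j in range(0, len(layer) - 1, 2):
                gates.append(('OR', (layer[j], layer[j + 1])))
                nxt.append(len(gates) - 1)
            if len(layer) % 2:          # an odd leftover passes through to the next layer
                nxt.append(layer[-1])
            layer = nxt
        return gates

    # Using size_and_depth from the earlier sketch:
    print(size_and_depth(or_circuit_ac0(8)))   # (9, 1)
    print(size_and_depth(or_circuit_nc1(8)))   # (15, 3)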

The word uniform above means that the map n ↦ Cₙ is very easy to compute, for example in L or FO. Though these uniformity conditions are a subject dear to my heart, we won't worry too much about the details of them in this course.

Overall, NC consists of those problems that can be solved in poly-log parallel time on a parallel computer with polynomially much hardware. The question of whether P = NC is the second most important open question in complexity theory, after the P = NP question. You wouldn't think that every problem in P can be sped up to polylog time by parallel processing. Some problems appear to be inherently sequential. If we prove that a problem is P-complete, we know that it is not in NC unless P = NC.

Theorem 3.4 CVP, MCVP (monotone CVP) and HORN-SAT are all P-complete.

Proof: CVP is the set of pairs (C, x) such that circuit C accepts input string x (which must thus be of the right length). This is pretty clearly in P because we can evaluate C gate by gate. To reduce an arbitrary P language to CVP, we use the tableau construction from the Fagin or Cook-Levin proofs (or from the proof that P is contained in alternating logspace). Since each cell of the tableau depends on those just below it, and any function of O(1) boolean inputs has circuits of O(1) size and depth (HW#1), we can build a circuit that computes each cell value. The output gate of this circuit tells us whether the machine accepts the input by looking at the first cell of the tape at the last time step.

The monotone problem MCVP is clearly still in P. To reduce CVP to MCVP we can use double-rail logic. For each gate g of the original circuit we make two gates, one to compute g and the other to compute ¬g. If we have done this inductively for g's children, it's easy to do it for g with AND and OR gates (there are four cases).
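The double-rail construction can be made explicit. Below is a small Python sketch (my illustration, reusing the same hypothetical gate-list encoding as before) that turns an arbitrary circuit into a monotone one carrying both each gate's value and its negation.

    def double_rail(circuit):
        """Translate a circuit with NOT gates into an equivalent monotone (AND/OR) circuit.

        For each original gate g we build two rails: pos[g] computes g and neg[g]
        computes NOT g.  De Morgan gives the four AND/OR cases; a NOT gate just
        swaps the rails.  We assume the input arrives in both polarities, i.e. the
        new circuit reads x followed by the bitwise complement of x.
        """
        new, pos, neg = [], {}, {}
        n = sum(1 for kind, _ in circuit if kind == 'INPUT')

        def add(kind, ins):
            new.append((kind, tuple(ins)))
            return len(new) - 1

        for g, (kind, ins) in enumerate(circuit):
            if kind == 'INPUT':
                pos[g] = add('INPUT', (ins[0],))
                neg[g] = add('INPUT', (ins[0] + n,))
            elif kind == 'NOT':
                pos[g], neg[g] = neg[ins[0]], pos[ins[0]]   # swap rails, no new gate
            elif kind == 'AND':
                pos[g] = add('AND', (pos[i] for i in ins))
                neg[g] = add('OR', (neg[i] for i in ins))
            else:  # 'OR'
                pos[g] = add('OR', (pos[i] for i in ins))
                neg[g] = add('AND', (neg[i] for i in ins))
        return new, pos[len(circuit) - 1]   # gate list plus index of the positive output rail

    # The earlier example circuit on input '110' becomes a monotone circuit on '110001';
    # evaluating the returned output rail gives the same answer as the original circuit.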

HORN-SAT is the language of satisfiable boolean formulas in Horn CNF: ANDs of clauses, each of the form (x₁ ∧ ⋯ ∧ xₖ) → y, where the xᵢ's are variables (not negated) and y is either a variable or the constant FALSE. We solve it in P with a greedy algorithm, starting with all variables false and setting true only those required by the clauses, in successive passes.

To reduce MCVP to HORN-SAT, we make a variable for each gate and one or two clauses for the effect of each gate. If g is an AND gate with children h₁ and h₂, we add the Horn clause (h₁ ∧ h₂) → g. If g is the OR of h₁ and h₂, we add h₁ → g and h₂ → g. We then force each 1-input's variable to true with a unit clause (the 0-inputs need no clause, since the greedy solution keeps them false) and add the clause w → FALSE for the output gate's variable w. The greedy passes set a gate's variable true exactly when that gate evaluates to 1, so the formula is unsatisfiable iff the circuit actually computes 1; since P is closed under complement, this reduction to the complement still shows that HORN-SAT is P-complete.
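The greedy passes described above amount to computing the least model by unit propagation. Here is a minimal Python sketch (my illustration, with a hypothetical clause encoding: each implication is a pair (body, head) of variable names, and clauses with head FALSE are listed separately).

    def horn_sat(implications, negative):
        """Decide satisfiability of a Horn formula.

        implications: list of (body, head) pairs meaning (AND of body vars) -> head;
                      a bare fact 'x' is encoded as ([], 'x').
        negative:     list of bodies b meaning (AND of b) -> FALSE.
        Greedy: start with every variable false, set a variable true only when some
        clause forces it, repeat until nothing changes, then check the negative clauses.
        """
        true = set()
        changed = True
        while changed:
            changed = False
            for body, head in implications:
                if head not in true and all(v in true for v in body):
                    true.add(head)
                    changed = True
        satisfiable = all(not all(v in true for v in body) for body in negative)
        return satisfiable, true

    # A tiny instance in the spirit of the MCVP reduction:
    # g = x1 OR x2, h = g AND x3, with inputs x1 = 1 and x3 = 1, and output gate h.
    impl = [([], 'x1'), ([], 'x3'), (['x1'], 'g'), (['x2'], 'g'), (['g', 'x3'], 'h')]
    print(horn_sat(impl, negative=[['h']]))   # (False, ...): h is forced, so the circuit outputs 1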

[Figure: the map of computability and complexity classes, from the Arithmetic Hierarchy (co-r.e. and r.e., with their complete problems) down through Recursive, Primitive Recursive, EXPTIME, PSPACE, the Polynomial-Time Hierarchy, co-NP and NP (with their complete problems and NP ∩ co-NP), P ("truly feasible"), NC, NC², log(CFL), SAC¹, NSPACE[log n], DSPACE[log n], NC¹, ThC⁰, AC⁰, the Logarithmic-Time Hierarchy, and Regular.]