Independent Study Complexity Theory


Independent Study Complexity Theory
Ryan Dougherty

Contents

1 Introduction & Preface
2 Review
   2.1 (Un)Decidability
   2.2 Reducibility
   2.3 Logical Theories
   2.4 Oracle TMs
   2.5 Computational Complexity
   2.6 Space Complexity
   2.7 Relativized Complexity
3 Polynomial Hierarchy, Alternating TMs
   3.1 Polynomial Hierarchy
   3.2 Alternating TMs
   3.3 Exercises
4 Boolean Circuits
   4.1 CKT-SAT
   4.2 Advice!
   4.3 AC, NC
   4.4 Parity & Switching Lemma
   4.5 ACC Circuits
   4.6 TC Hierarchy
5 Randomization
   5.1 RP, coRP, ZPP
   5.2 Randomized Reductions
   5.3 BPL, RL

6 Interactive Proofs
   6.1 IP
   6.2 IP = PSPACE
   6.3 AM, MA
7 PCP Theorem
   7.1 Hardness of Approximation
   7.2 Equivalence of the Theorems
8 Decision Trees
9 Counting Complexity
   9.1 #P-completeness
   9.2 #PSPACE
   9.3 Toda's Theorem
10 Derandomization
   10.1 Proving Toda's Theorem
11 Parameterized Complexity
   11.1 Parameterized Reductions
   11.2 paraNP
   11.3 W Hierarchy

1 Introduction & Preface

Welcome to this series of lecture notes! The main book the material comes from is Arora and Barak's Computational Complexity [AB09]. Some material assumed of the reader (and reviewed in Section 2) is from Sipser's Introduction to the Theory of Computation [Sip12]. We assume that the reader has a reasonable understanding of the following material:

{Regular, Context-free, Turing-decidable, Turing-recognizable} languages, and their machine counterparts
(Un)decidability
Reducibility
Recursion theorem
Time complexity
Space complexity

2 Review

This section highlights many of the key definitions and theorems studied in a first-year graduate (or advanced undergraduate) course in complexity theory. We assume the reader knows about finite automata (DFAs/NFAs), grammars (CFGs), and Turing machines (TMs), and their respective language classes. This review roughly covers the first four chapters of [AB09], and the first eight (and part of the ninth) chapters of [Sip12].

2.1 (Un)Decidability

Definition. A TM is a decider if it halts (accepts or rejects) on every input. A language B is decidable if there exists a decider D such that L(D) = B. A language C is undecidable if C is not decidable.

Theorem. The following are decidable:
A_DFA = {⟨M, w⟩ : M is a DFA that accepts w}.
E_DFA = {⟨M⟩ : M is a DFA whose language is empty}.
ALL_DFA = {⟨M⟩ : M is a DFA whose language is Σ*}.
EQ_DFA = {⟨M1, M2⟩ : M1 and M2 are DFAs and L(M1) = L(M2)}.
A_CFG = {⟨G, w⟩ : G is a CFG that generates w}.
E_CFG = {⟨G⟩ : L(G) is empty}.

Theorem. The following are undecidable:
ALL_CFG = {⟨G⟩ : G is a CFG and L(G) = Σ*}.
EQ_CFG = {⟨G1, G2⟩ : G1 and G2 are CFGs and L(G1) = L(G2)}.
A_TM = {⟨M, w⟩ : M is a TM that accepts w}.

Theorem. The class of decidable languages is closed under complement.

Definition. A language B is Turing-recognizable (or recognizable) if there exists a TM that recognizes B. A language C is co-Turing-recognizable (or co-recognizable) if it is the complement of some Turing-recognizable language.

Theorem. A_TM is not co-recognizable.

Theorem. A language B is decidable if and only if B is recognizable and co-recognizable.
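The decidability of A_DFA is by direct simulation: a DFA always halts after reading its input. A minimal sketch (the dict-based DFA encoding is an assumption for illustration):

```python
# Hypothetical sketch of a decider for A_DFA = {<M, w> : DFA M accepts w}.
# Simulating a DFA always terminates after |w| steps, so this is a decider.

def dfa_accepts(dfa, w):
    """Run the DFA on w and report acceptance."""
    state = dfa["start"]
    for symbol in w:
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]

# Example DFA over {0,1} accepting strings with an even number of 1s.
even_ones = {
    "start": "e",
    "accept": {"e"},
    "delta": {("e", "0"): "e", ("e", "1"): "o",
              ("o", "0"): "o", ("o", "1"): "e"},
}
```

Deciders for E_DFA and EQ_DFA follow similarly via graph reachability on the DFA's state diagram.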

5 2.2 Reducibility Ryan Dougherty 2.2 Reducibility Definition A function f : Σ Σ is a computable function if there exists a TM that, on input w, halts with f(w) on its tape. A language A is mapping-reducible to language B, written A m B, if there exists a computable function f such that w A if and only if f(w) B. Theorem If A m B and B is decidable, then A is decidable; if A is undecidable, then B is undecidable; if B is recognizable, then A is recognizable; if A is not recognizable, then B is not recognizable. Corollary HALT T M = { M, w : M is a TM that halts on input w} is undecidable. Definition A TM s language has a property P (a subset of all TM descriptions) such that whenever M 1, M 2 are TMs, and L(M 1 ) = L(M 2 ), M 1 P if and only if M 2 P. A property P is nontrivial if some TM has property P and some other TM does not. Theorem (Rice s Theorem). Deciding whether a TM has a nontrivial property P of its language is undecidable. Theorem EQ T M = { M 1, M 2 : M 1, M 2 are TMs and L(M 1 ) = L(M 2 )} is undecidable; also, it is neither recognizable nor co-recognizable. Definition A configuration of a TM on input w = w 1 w n in state q is w 1 w i 1 qw i w n. A computation history is a set of configurations delimited by an extra symbol #: #C 1 #C 2 # #C l #, where C i logically yields C i+1. An accepting computation history is one such that C 1 is the start configuration, and C l is an accepting one. Definition A linear bounded automaton (LBA) is a TM that does not allow to move the tape head past the right end of the input. Theorem A LBA = { M, w : M is an LBA that accepts w} is decidable. Definition The Post Correspondence Problem (PCP) is a puzzle, with a given set of tiles with nonempty top strings and nonempty bottom strings. The objective is to list the tiles, repetitions allowed, such that the concatenation of the top strings of all the chosen tiles equals the same of the bottom strings. Theorem PCP is undecidable. Theorem (Recursion Theorem). 
Let a TM T compute a function t : Σ* × Σ* → Σ*. Then there exists a TM R that computes a function r : Σ* → Σ*, such that r(w) = t(⟨R⟩, w) for all w. In other words, every TM can obtain its own description.

Definition. A TM M is minimal if there does not exist a TM N that has fewer states and L(M) = L(N).

Theorem. MIN_TM = {⟨M⟩ : M is a TM and is minimal} is not recognizable.
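The Post Correspondence Problem from earlier in this section is undecidable, but solvable instances can still be found by search; the search just cannot be bounded in general. A hedged sketch (tiles as (top, bottom) pairs is an assumption; the depth bound makes this a semi-decision procedure at best):

```python
# Breadth-first search for a PCP match up to a length bound. Since PCP is
# undecidable, no bound works in general; this only confirms positive
# instances that happen to have short solutions.

from collections import deque

def pcp_match(tiles, max_len=20):
    """Return a list of tile indices whose top strings concatenate to the
    same string as the bottom strings, or None if none exists within max_len."""
    queue = deque([i] for i in range(len(tiles)))
    while queue:
        seq = queue.popleft()
        top = "".join(tiles[i][0] for i in seq)
        bot = "".join(tiles[i][1] for i in seq)
        if top == bot:
            return seq
        # Only extend sequences where one string is a prefix of the other.
        if len(seq) < max_len and (top.startswith(bot) or bot.startswith(top)):
            for i in range(len(tiles)):
                queue.append(seq + [i])
    return None

# Example instance with the solution a, b, ca, a, abc / ab, ca, a, ab, c.
tiles = [("a", "ab"), ("b", "ca"), ("ca", "a"), ("abc", "c")]
```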

6 2.3 Logical Theories Ryan Dougherty 2.3 Logical Theories Definition A formula over some operations is atomic if R i over variables x 1,, x l is a relation of arity l. A formula φ is well-formed if it is atomic, a formula formed from other atomic formulas using the operations, or of the form x[φ 1 ] or x[φ 1 ] where φ 1 is a well-formed formula. A variable is bounded if it is within the scope of a quantifier, and free otherwise. A well-formed formula with no free variables is a sentence or a statement. The universe is the set of possible values for each variable, and the model specifies the universe and relations used. The theory of a model M, called T h(m), is the set of true statements. A formula in prenex normal form is one that has all quantifiers appear first. Theorem T h(n, +) is decidable. Theorem T h(n, +, ) is undecidable. Definition A formal proof of a statement φ is a sequence of statements S 1,, S l where S l = φ, where each S i follows logically from preceding statements and axioms (statements not requiring a proof). If all provable statements are true, then the system is sound; if all true statements are provable, then the system is complete. Theorem P rovable(n, +, ) = {set of statements in (N, +, ) that have proofs} is recognizable. Theorem There exists a true, but unprovable statement in T h(n, +, ). 2.4 Oracle TMs Definition An oracle TM M is a TM with an oracle tape that, when the TM writes a string onto this tape, invokes the oracle (of a language L) and decides membership of the written string in L in zero time, and returns a yes or no answer (written as M L ). A language A is decidable relative to a language B A T B if there is a TM M B that decides A. A is Turing-reducible to B if and only if A T B. Theorem E T M T A T M. Theorem A T M = { M, w : M is a TM with an oracle for A T M and M accepts w} is undecidable relative to A T M. 
Definition. The minimal description of a string x, written d(x), is the shortest string ⟨M, w⟩ such that TM M, on input w, halts with x on its tape. The descriptive (Kolmogorov) complexity of x is K(x) = |d(x)|.

Theorem. K(x) ≤ |x| + c for a constant c.

Theorem. K(xx) ≤ K(x) + d for a constant d.

Theorem. K(xy) ≤ 2 log2(K(x)) + K(x) + K(y) + e for a constant e.

Definition. A string x is incompressible if K(x) ≥ |x|.

Theorem. For every n, incompressible strings of length n exist; moreover, at least half of all strings of length n cannot be compressed by more than one bit.

Theorem. K(x) is not computable.

Theorem. No infinite subset of the set of incompressible strings is recognizable.
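The counting argument behind the existence of incompressible strings can be checked directly: there are strictly fewer candidate short descriptions than strings of length n. A minimal sketch of the count:

```python
# Counting sketch: there are 2^0 + ... + 2^(n-1) = 2^n - 1 binary strings
# shorter than n, and each can describe at most one string, so at least one
# of the 2^n strings of length n has no description shorter than itself.

def short_descriptions(n):
    """Number of binary strings of length < n (candidate short descriptions)."""
    return sum(2 ** i for i in range(n))

def guaranteed_incompressible(n):
    """Lower bound, from counting alone, on strings of length n with K(x) >= n."""
    return 2 ** n - short_descriptions(n)
```

The same count with descriptions of length ≤ n − 2 (at most 2^(n−1) − 1 of them) gives the "at least half cannot be compressed by more than one bit" statement.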

2.5 Computational Complexity

Definition. TIME(f(n)) (resp., NTIME(f(n))) is the set of languages decidable within O(f(n)) steps on a single-tape deterministic (resp., nondeterministic) TM. P = ∪_{k≥0} TIME(n^k), NP = ∪_{k≥0} NTIME(n^k). For decision on an NTM, every computation branch halts; time is the number of transitions on the longest computation path, and space is the maximum number of cells visited on any computation path.

Theorem. The following are members of P:
All regular languages
All context-free languages
PATH = {⟨G, s, t⟩ : G is an undirected graph having a path from s to t}

Definition. A verifier is a TM that accepts a string w and a certificate c, and verifies whether c is valid.

Theorem. NP can also be defined as the set of languages with a polynomial-time verifier.

Definition. A language A is polynomial-time reducible to a language B (A ≤p B) if the mapping reduction is computable in polynomial time.

Theorem. Suppose A ≤p B. If B ∈ P, then A ∈ P; if A ∉ P, then B ∉ P.

Definition. A boolean formula φ in conjunctive normal form is a conjunction of clauses, where each clause is a disjunction of literals. A formula in 3CNF has 3 literals per clause. A formula is satisfiable if there exists an assignment to the variables making the formula true.

Theorem. 3SAT = {⟨φ⟩ : φ is a 3CNF formula that is satisfiable} ∈ NP, and 3SAT ≤p CLIQUE = {⟨G, k⟩ : G is a graph with a k-clique}.

Definition. A language B is NP-complete if B ∈ NP, and for every A ∈ NP, A ≤p B. If only the second condition is true, then B is NP-hard.

Theorem. The following are NP-complete:
3SAT (the Cook-Levin theorem)
CLIQUE
INDSET (same as CLIQUE, but with no edges between the chosen vertices)
VERTEX-COVER (whether there exists a subset of vertices of size k such that every edge involves a vertex in the subset)
HAMPATH (whether a directed graph contains a directed path through every vertex exactly once)
UHAMPATH (undirected version of HAMPATH)
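The 3SAT ≤p CLIQUE reduction can be sketched concretely. The clause encoding (lists of integers, with −k for a negated variable x_k) is an assumption for illustration; the brute-force clique check is only there to test the reduction on tiny instances:

```python
# Sketch of the standard 3SAT -> CLIQUE reduction: one vertex per literal
# occurrence, edges between non-contradictory literals in different clauses.
# phi is satisfiable iff the resulting graph has a k-clique, k = #clauses.

from itertools import combinations

def threesat_to_clique(clauses):
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = {(u, v) for u in vertices for v in vertices
             if u[0] != v[0] and u[1] != -v[1]}  # different clause, consistent
    return vertices, edges, len(clauses)

def has_clique(vertices, edges, k):
    """Exponential brute-force k-clique test, fine for tiny instances."""
    return any(all((u, v) in edges for u, v in combinations(cand, 2))
               for cand in combinations(vertices, k))
```

A k-clique must take one vertex per clause (no intra-clause edges), and the chosen literals are mutually consistent, so they extend to a satisfying assignment.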

8 2.6 Space Complexity Ryan Dougherty 2.6 Space Complexity Definition PSPACE= k 0 SP ACE(nk ), NPSPACE= k 0 NSP ACE(nk ). Theorem For f(n) n, T IME(f(n)) SP ACE(f(n)). Theorem (Savitch s Theorem). For f(n) n, NSP ACE(f(n)) SP ACE(f 2 (n)). Corollary PSPACE= NPSPACE. Definition A language B Is PSPACE-complete if B PSPACE, and for every A PSPACE, A p B. If only the second condition is true, then B is PSPACE-hard. Theorem The following are PSPACE-complete: TQBF = { ψ : ψ is a true quantified boolean formula} (i.e., of the form ψ = Q 1 x 1 Q n x n φ(x 1,, x n ) where the Q i {, }). FORMULA-GAME (a 2-player version of TQBF, where players take turns choosing values for the variables in order) Generalized Geography (a directed graph where each vertex is a string, and each edge has the next string start with the same letter as the previous one) A LBA Definition L= SP ACE(log(n)), NL= N SP ACE(log(n)). Definition A log-space transducer is a deterministic TM with read-only input, writeonly output, and a read-write work tape, and the space it uses is equal to the length of the non-blank portion of the work tape + log(size of input) + log(size of output). A language A is log-space reducible to a language B if there is a log-space transducer that computes a function f for which w B if and only if f(w) A. A language B Is NL-complete if B NL, and for every A NL, A L B. If only the second condition is true, then B is NL-hard. Theorem (Immerman-Szelepcsényi Theorem). PATH = { G, s, t : G is a directed graph with a directed s t path} is NL-complete, and conl-complete. Corollary NL= conl. Definition A function f is space-constructible if there is a TM that computes the function mapping 1 n (in unary) to f(n) (in binary) in O(f(n)) space. Theorem (Space Hierarchy Theorem). If f is a space-constructible function, then there exists a language that can be decided in O(f(n)) space, but not in o(f(n)) space. Corollary PSPACE EXPSPACE. Corollary NL PSPACE. 
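The recursion behind Savitch's theorem ("can v be reached from u in at most 2^t steps?") can be sketched on an explicit graph standing in for a configuration graph (the adjacency-dict encoding is an assumption):

```python
# Savitch-style recursive reachability: recursion depth t, one midpoint
# stored per level. When vertices are configurations of an f(n)-space NTM
# and t = O(f(n)), this reuse of space gives the O(f(n)^2) bound.

def reach(adj, u, v, t):
    """True iff v is reachable from u in at most 2**t steps."""
    if t == 0:
        return u == v or v in adj.get(u, ())
    # Deterministically try every vertex as the midpoint of the path.
    return any(reach(adj, u, m, t - 1) and reach(adj, m, v, t - 1)
               for m in adj)

# A path graph a -> b -> c -> d: d is 3 steps from a.
path4 = {"a": ["b"], "b": ["c"], "c": ["d"], "d": []}
```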
Definition. A function f, which is Ω(n log n), is time-constructible if there is a TM that computes the function mapping 1^n (in unary) to f(n) (in binary) in O(f(n)/log n) time.

Theorem (Time Hierarchy Theorem). If f is a time-constructible function, then there exists a language that can be decided in O(f(n)) time, but not in o(f(n)/log f(n)) time.

Corollary. P ⊊ EXP.

Corollary. For any 1 < c < d, TIME(n^c) ⊊ TIME(n^d).

Definition. A language B is EXPSPACE-complete if B ∈ EXPSPACE, and for every A ∈ EXPSPACE, A ≤p B. If only the second condition is true, then B is EXPSPACE-hard.

Theorem. EQ_REX↑ = {⟨Q, R⟩ : Q and R are equivalent regular expressions with exponentiation} is EXPSPACE-complete.

2.7 Relativized Complexity

Definition. P^A (resp., NP^A) = {L : L is decided by an oracle TM with an oracle for A in deterministic (resp., nondeterministic) polynomial time}.

Theorem. There exist oracles A, B such that P^A = NP^A, and P^B ≠ NP^B.

3 Polynomial Hierarchy, Alternating TMs

3.1 Polynomial Hierarchy

From Theorem 2.7.2, we have a notion of using P and NP with the power of oracle machines. And of course, we have a hierarchy of EXPTIME classes (singly exponential, doubly exponential, ...). However, we do not have a generalization of a hierarchy of such oracle machines (the theorem only concerns the first "level"). Therefore, in [MS72], the notion of a polynomial hierarchy was created. The hierarchy is defined (equivalently) as follows: Σ_0^P = Π_0^P = P, Σ_1^P = NP, Π_1^P = coNP, Σ_i^P = NP^{Σ_{i-1}^P} for i ≥ 2, and Π_i^P = coNP^{Π_{i-1}^P} for i ≥ 2.

[Figure 1: the polynomial hierarchy. Levels from the bottom: Σ_0^P = P = Π_0^P; Σ_1^P = NP and Π_1^P = coNP; Σ_2^P = NP^NP and Π_2^P = coNP^NP; Σ_3^P = NP^{NP^NP} and Π_3^P = coNP^{NP^NP}; and so on. Each arrow represents inclusion: for example, Σ_1^P ⊆ Σ_2^P.]

Definition. The polynomial hierarchy, called PH, is defined to be: PH = ∪_{k=1}^∞ Σ_k^P.

However, there is an alternate definition of the polynomial hierarchy that we will now use:

Definition. A language L ∈ Σ_i^P if there is a poly-time TM M and a polynomial p such that x ∈ L if and only if:

∃u_1 ∈ {0,1}^{p(|x|)} ∀u_2 ∈ {0,1}^{p(|x|)} ... Q_i u_i ∈ {0,1}^{p(|x|)} M(x, u_1, ..., u_i) accepts,

where Q_i is either ∃ or ∀ depending on whether i is odd or not. Also, define PH as before. We prove that the definitions are equivalent. It is sufficient to show the following:

Theorem. For all i ≥ 2, Σ_i^P = NP^{Σ_{i-1}SAT} (the Σ_i^P is defined the new way, and NP^{Σ_{i-1}SAT} is defined the old way).

Proof. We prove the i = 2 case, as higher i values work the same way. So, we need to prove Σ_2^P (defined the new way) is equal to NP^SAT. If L ∈ Σ_2^P, then there exists a poly-time TM M and a polynomial p such that x ∈ L if and only if (by definition):

∃u_1 ∈ {0,1}^{p(|x|)} ∀u_2 ∈ {0,1}^{p(|x|)} M(x, u_1, u_2) accepts.

The part "∀u_2 ∈ {0,1}^{p(|x|)} M(x, u_1, u_2) accepts" is exactly the negation of "∃u_2 ∈ {0,1}^{p(|x|)} M(x, u_1, u_2) rejects," and the latter is an NP statement. We can use the SAT oracle for this. Therefore, there exists an NDTM N that, given oracle access to SAT, decides L (by nondeterministically guessing u_1). This shows Σ_2^P ⊆ NP^SAT.

Now we show NP^SAT ⊆ Σ_2^P. Let L ∈ NP^SAT; there exists a poly-time NDTM N with oracle access to SAT that decides L. Therefore, x ∈ L if and only if there is a sequence of nondeterministic choices (and oracle answers) that has N accept x. Let the choices be c_1, ..., c_m ∈ {0,1}, and the answers be a_1, ..., a_k ∈ {0,1}. So, if N uses these choices and receives a_i from the oracle as the answer to the ith query made, then:

1. N goes to q_accept, and
2. all answers are correct.

Let φ_i be the ith query. Then, the second condition is equivalent to: if a_i = 1, then there is an assignment u_i such that φ_i(u_i) is true; and if a_i = 0, then for all assignments v_i, φ_i(v_i) is false. Therefore, x ∈ L if and only if:

∃c_1, ..., c_m, a_1, ..., a_k, u_1, ..., u_k ∀v_1, ..., v_k such that:

1. N accepts x using c_1, ..., c_m and a_1, ..., a_k, and:
2.
If a_i = 1, then φ_i(u_i) is true, for all i, and:

3. If a_i = 0, then φ_i(v_i) is false, for all i.

We can wrap the last two statements in another TM, so this implies L ∈ Σ_2^P.

Theorem. PH can also be defined as: PH = ∪_{k=1}^∞ Π_k^P.

Proof. We have the simple inclusions Σ_k^P ⊆ Π_{k+1}^P ⊆ Σ_{k+2}^P, so every Σ level is contained in some Π level and vice versa, and the two unions coincide.
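The quantifier characterization of Σ_2^P can be made concrete by brute force over the certificates (exponential time, of course; here M is an arbitrary boolean predicate standing in for the poly-time TM):

```python
# Brute-force check of the Sigma_2^P pattern:
#   exists u1 in {0,1}^p  forall u2 in {0,1}^p  M(x, u1, u2) accepts.

from itertools import product

def sigma2_holds(M, x, p):
    certs = ["".join(bits) for bits in product("01", repeat=p)]
    return any(all(M(x, u1, u2) for u2 in certs) for u1 in certs)
```

Swapping `any` and `all` gives the Π_2^P pattern, and nesting more levels gives the higher Σ_i^P / Π_i^P levels.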

For PH, we often call the i-th level of PH both Σ_i^P and Π_i^P. A natural inclination, as has been done before with NP, PSPACE, NL, and the like, is to find a complete problem for PH. However, the following theorem shows that this is most likely not possible.

Theorem. If there is a PH-complete language, then PH only has a finite number of levels.

Proof. Assume there exists a PH-complete language L. By definition, PH = ∪_{k=1}^∞ Σ_k^P. Therefore, for some j, we have that L ∈ Σ_j^P, and every language in PH is polynomial-time reducible to L. However, any language A that is polynomial-time reducible to some language B ∈ Σ_j^P has A ∈ Σ_j^P. Therefore, PH ⊆ Σ_j^P.

This is quite a negative result compared to the complexity classes that we have studied earlier. However, for each i, there are many complete languages for Σ_i^P and Π_i^P (see [SU02]):

Definition. We define the following two languages:
Σ_i SAT = {⟨ψ⟩ : ψ is of the form ∃x_1 ∀x_2 ... Q_i x_i [φ], where φ contains the variables x_1, ..., x_i, and ψ is true},
Π_i SAT = {⟨ψ⟩ : ψ is of the form ∀x_1 ∃x_2 ... Q_i x_i [φ], where φ contains the variables x_1, ..., x_i, and ψ is true}.

Theorem. Σ_i SAT is Σ_i^P-complete, and Π_i SAT is Π_i^P-complete (completeness for Σ_i^P, Π_i^P is defined very similarly to that of NP, PSPACE, etc.).

Proof. Proven in [Wra76], but we give a shorter proof here. It is easy to see that Σ_i SAT ∈ Σ_i^P: we take a TM that evaluates the formula φ(x_1, ..., x_n), and we have that φ ∈ Σ_i SAT if and only if ∃u_1 ∀u_2 ... Q u_i [M(φ, u_1, ..., u_i)] is true, by definition. Any L ∈ Σ_i^P has a poly-time TM M, by definition (note: this does not mean that L ∈ P, but rather that the TM M in the definition above runs in poly-time). The computation of M on (x, y_1, ..., y_i) can be converted into a boolean formula φ_{x,y_1,...,y_i}, as was done in the proof of NP-hardness for 3SAT. Therefore, L ≤p Σ_i SAT. The proof for Π_i SAT is very similar.
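A brute-force evaluator for Σ_i SAT / Π_i SAT-style sentences can be sketched directly; the encoding (a string of quantifier flags plus a Python predicate over boolean tuples) is an assumption for illustration:

```python
# Recursive evaluation of Q1 x1 Q2 x2 ... [phi], with quants a string over
# 'E' (exists) and 'A' (forall). A Sigma_i SAT sentence starts with 'E' and
# alternates i - 1 times; swapping the first flag gives Pi_i SAT.

def qbf_true(quants, phi, assignment=()):
    if not quants:
        return phi(assignment)
    branches = [qbf_true(quants[1:], phi, assignment + (b,))
                for b in (False, True)]
    return any(branches) if quants[0] == "E" else all(branches)
```

For example, `qbf_true("AE", lambda a: a[0] != a[1])` evaluates ∀x ∃y (x ≠ y).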
3.2 Alternating TMs

Definition. Alternating TMs (ATMs) are generalizations of NTMs: they behave the same as NTMs, but have an extra feature, that every non-halting state has a label from {∃, ∀}. When running on some input, if the current state has the ∃ label, then the ATM accepts if some transition path from that state accepts; if the state has the ∀ label, then the ATM accepts if all transition paths from that state accept.

We need to make a distinction between ATMs that can alternate a limited and an unlimited number of times:

Definition. For all i ∈ N, we say L ∈ Σ_i TIME(T(n)) if there exists a T(n)-time ATM M with initial state labelled ∃ that recognizes L and, on every possible input and every path from the starting configuration, alternates at most i − 1 times (i.e., the state label changes between ∃ and ∀). We say L ∈ Π_i TIME(T(n)) with the same definition as above, but the initial state is labelled ∀ instead of ∃.

Now we turn our attention to ATMs that can alternate an unlimited number of times:

Definition. We say L ∈ ATIME(T(n)) if there is an O(T(n))-time ATM M that accepts x if and only if x ∈ L. We set AP = ∪_{c≥0} ATIME(n^c).

We need to refine our definition of what accepting an input is, because of the quantifiers on the states. We define G_{M,x} to be the DAG (directed acyclic graph), called the configuration graph, of M's computation on x, where for any configurations C, C′, there is an edge in G_{M,x} from C to C′ if and only if C′ can be obtained from C by 1 step of M's execution. Therefore, accepting an input is defined as follows:

1. The configuration C_accept, in which M is in state q_accept, is labelled with a special name "Accept."
2. If the machine is in an ∃ state at C, and there is an edge from C to some C′ labelled "Accept," then we label C "Accept."
3. Similarly, if the machine is in a ∀ state at C, and every configuration C′ with an edge from C is labelled "Accept," then we label C "Accept."
4. We say M accepts x if, at the end of recursively applying these rules, C_start is labelled "Accept."

So how powerful is AP? It seems that it can be very powerful! However, we show that all AP languages can be simulated in poly-space:

Theorem. AP = PSPACE.

Proof. For the TQBF problem, we can guess values for each ∃-quantified variable using an ∃ state, and for each ∀-quantified variable using a ∀ state. The poly-time evaluation of the formula is done at the very end. Therefore, TQBF ∈ AP, and since TQBF is PSPACE-complete, we have PSPACE ⊆ AP. For the other direction, we do a similar proof to NP ⊆ PSPACE. Let L ∈ AP, and consider the game tree of the ATM's computation on x, which has poly depth. If the current node is a leaf (i.e., a halting configuration), output its value. Otherwise, recursively evaluate the sub-game trees of the current node. We define two players P1 and P2. If the state is ∃, output that P1 is the winner if and only if P1 wins at least one of the sub-game trees.
Otherwise, if the state is ∀, output that P2 is the winner if and only if P2 wins at least one of the sub-game trees. The algorithm is correct; it also runs in poly space, because the recursion only has poly depth, and at each level we only need to store the current node's name, which is poly in length. Therefore, L ∈ PSPACE.

3.3 Exercises

1. Show that if NP = coNP, then PH = NP. Generalize this and show that if Σ_k^P = Π_k^P, then PH = Σ_k^P. Hint: use an induction argument on the number of applications of an NP oracle, and then show that NP^{Σ_i^P} = NP^{NP^{···^NP}}.

2. We defined the polynomial hierarchy in terms of P; what about PSPACE? Look at Σ_k^PSPACE, Π_k^PSPACE and determine that PSPACE = ∪_{k≥1} Σ_k^PSPACE = ∪_{k≥1} Π_k^PSPACE. Hint: showing PSPACE = PSPACE^PSPACE suffices (why?).

4 Boolean Circuits

We have looked a little at boolean formulas, which take input variables that have values true or false and, through some operations, give a result of either true or false as output. Here, we look at circuits, which are a generalization of formulas.

Definition. A boolean circuit is a DAG (directed acyclic graph) with a number of source vertices (those with no incoming edges), also called inputs, and a sink vertex (with no outgoing edges), also called the output. Internal vertices (gates) are labelled with ∧, ∨, ¬; those labelled ∧, ∨ have fan-in 2 and fan-out 1, and those labelled ¬ have fan-in 1 and fan-out 1. The size of a circuit is the number of vertices it has. The output on a particular input x ∈ {0,1}^n is obtained by applying each of the vertex rules recursively until the output vertex is reached.

Theorem. Any circuit whose ∧, ∨ vertices have bounded fan-in (e.g., 3) can be converted to an equivalent circuit in which these vertices have only fan-in 2.

Definition. A T(n)-size circuit family is a sequence of circuits {C_n}_{n∈N} such that, for a given function T : N → N, circuit C_n has size at most T(n) for all n. We say that L ∈ SIZE(T(n)) if there exists a T(n)-size circuit family such that for all x ∈ {0,1}^n, x ∈ L if and only if C_n(x) = 1 (i.e., C_n accepts x).

We start with an example: L_1 = {1^n : n ∈ N}.

Theorem. L_1 can be decided by a linear-sized circuit family.

Proof. The nth circuit is simply a tree of ∧ gates that computes the AND of all n inputs.

Definition. We define P/poly to be ∪_c SIZE(n^c); in other words, P/poly is the set of languages with poly-size circuit families.

Theorem. All unary languages are in P/poly.

Proof. Implied by the construction in the previous theorem.

Theorem. P ⊆ P/poly, and the inclusion is strict.

Proof. Let M be an oblivious TM, which is one whose head movement is independent of the input contents, but may be dependent on the length of the input. We need to show that for any T(n)-time oblivious TM, we can construct an equivalent O(T(n))-size circuit family.
Let x be an input for M, and look at the sequence of M's configurations: C_1, ..., C_{T(n)}. Encode each configuration by a constant-size binary string. We can compute C_i based on the following: x itself, C_{i−1}, and C_{i_1}, ..., C_{i_k}, where i_j is the last step before i at which M's jth head was in the same position as it is in the ith step.

Because we assumed M is oblivious, i_1, ..., i_k do not depend on x; they only depend on i. Therefore, since we only have a constant number of strings of constant size, we can compute C_i with a constant-size circuit. We compose these circuits together so that, on input x, we compute C_{T(n)}, M's last configuration in its execution on x. We also have a constant-size circuit that outputs 1 if and only if C_{T(n)} is an accepting configuration. Since we have O(T(n)) circuits of constant size, we have constructed an O(T(n))-size circuit equivalent to M.

For showing strict containment: all unary languages, including undecidable ones, are in P/poly by Theorem 4.0.6, and undecidable languages are by definition not in P.

4.1 CKT-SAT

As we proved the Cook-Levin Theorem before, now we prove the circuit analog for SAT. CKT-SAT is the set of all circuits that have a satisfying assignment to the n input variables. In other words, C ∈ CKT-SAT if and only if there exists a w ∈ {0,1}^n such that C(w) = 1.

Theorem. CKT-SAT is NP-complete.

Proof. CKT-SAT is clearly in NP (the certificate is w). For hardness, if L ∈ NP, then there is a verifier V that, on input x and a certificate, verifies whether x ∈ L in polynomial time. By Theorem 4.0.7, we can convert V into an equivalent circuit C in polynomial time. Therefore, x ∈ L if and only if C ∈ CKT-SAT.

4.2 Advice!

We defined P/poly in one way; now we will see another. This will make the fact that all undecidable unary languages are in P/poly clearer.

Definition. The set of languages decidable by T(n)-time TMs with A(n) bits of advice is called DTIME(T(n))/A(n). By this, we mean that for every L ∈ DTIME(T(n))/A(n), there is a sequence of strings {a_n}, each of length A(n), and a T(n)-time TM M, such that M accepts ⟨x, a_n⟩ (where n = |x|) if and only if x ∈ L. In other words, we are given a sequence of strings of length A(n) that act as certificates, in a sense.

Now we will look at the alternative definition of P/poly:

Theorem. P/poly can be re-defined to be ∪_{c,d≥0} DTIME(n^c)/n^d.
Proof. By the first definition of P/poly, if L P/poly, then L is computable by a poly-size circuit family {C n }. We can just use the descriptions of each of the C n as an advice string, and the TM is poly-time that, on input x, C, where C is an n-input circuit, outputs C(x). If L is decided by a poly-time TM M with access to the sequence of advice strings {a n }, each of poly-length, then we can use the construction of Theorem to obtain an equivalent poly-size circuit B n for all n (i.e., outputs the same as M does). Let C n be the same as B n but with a n hard-coded as the second input (i.e., fix the inputs). Therefore, {C n } is the desired circuit-family. 15
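The circuit evaluations used throughout this chapter can be sketched concretely; the gate encoding below is an assumption for illustration, and the CKT-SAT check from Section 4.1 is just brute force over assignments:

```python
# Evaluate a boolean circuit given as a DAG; gates map a name to
# ("IN", i) | ("NOT", g) | ("AND", g1, g2) | ("OR", g1, g2).

from itertools import product

def eval_circuit(gates, output, x):
    def val(g):
        kind, *args = gates[g]
        if kind == "IN":
            return bool(x[args[0]])
        if kind == "NOT":
            return not val(args[0])
        vals = [val(a) for a in args]
        return all(vals) if kind == "AND" else any(vals)
    return val(output)

def ckt_sat(gates, output, n):
    """Brute-force CKT-SAT: try all 2^n inputs (exponential time)."""
    return any(eval_circuit(gates, output, w) for w in product([0, 1], repeat=n))

# x0 AND (NOT x1) is satisfiable; x0 AND (NOT x0) is not.
sat_ckt = {"x0": ("IN", 0), "x1": ("IN", 1),
           "g": ("NOT", "x1"), "out": ("AND", "x0", "g")}
unsat_ckt = {"x0": ("IN", 0), "g": ("NOT", "x0"), "out": ("AND", "x0", "g")}
```

Hard-coding an advice string a_n as extra fixed inputs, as in the proof above, amounts to replacing some ("IN", i) gates by constants.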

We can actually get a nondeterministic parallel to Theorem 4.0.7:

Theorem. NP ⊊ ∪_{c,d≥0} NTIME(n^c)/n^d (note NTIME instead of DTIME), and the inclusion is strict.

Proof. It is easy to see inclusion: set d = 0, give an NP machine a one-bit advice string for every n, and have the machine ignore the advice. For strict inclusion, we know that all unary languages are in DTIME(n)/1 ⊆ NTIME(n)/1, because we only need one bit of advice for each n, telling the machine whether 1^n is in the language. However, the following undecidable language is then also in there, and therefore is not in NP: UHALT = {1^n : n's binary expansion encodes ⟨M, w⟩ where M halts on w}.

We have proved both of these containments, but how does NP relate to P/poly (instead of the NTIME version)? We actually do not know for sure, but an inclusion would collapse the polynomial hierarchy:

Theorem (Karp-Lipton). If NP ⊆ P/poly, then PH = Σ_2^P.

Proof. We just need to show that Π_2^P ⊆ Σ_2^P. If NP ⊆ P/poly, then there is a poly-size circuit family {C_n} such that φ ∈ SAT if and only if C_{|φ|}(φ) outputs a satisfying assignment to φ. It suffices to show Π_2 SAT ∈ Σ_2^P. The structure of an element of Π_2 SAT looks like: ∀x_1 ∃x_2 φ(x_1, x_2) is true. The "∃x_2 φ(x_1, x_2) is true" part is exactly SAT. We use nondeterminism (by assumption) to guess the circuit for SAT, thereby removing the ∃ quantifier. Let φ_u(·) = φ(u, ·). Therefore, ∀x_1 ∃x_2 φ(x_1, x_2) is true if and only if ∃C a circuit ∀u φ_u(C(φ_u)) is true. Therefore, this is in Σ_2^P.

4.3 AC, NC

We have dealt with circuits of polynomial size, but what about the other direction? We define an analogue of L, NL from space complexity, but for circuits.

Definition. For d ∈ N, we say L ∈ NC^d if L is decided by a family of poly-size, O(log^d n)-depth circuits {C_n}, and L ∈ AC^d the same way but with the gates allowed unbounded fan-in. We also define NC = ∪_{d≥0} NC^d and AC = ∪_{d≥0} AC^d.

So how much power does unbounded fan-in give?
It only gives enough power to add (at most) a log factor in depth:

Theorem. NC^d ⊆ AC^d ⊆ NC^{d+1}.

Proof. Unbounded fan-in can be simulated by an O(log n)-depth tree of bounded fan-in OR/AND gates.

We also show that it is unlikely that any finite level of the NC^d hierarchy collapses (just like the PH hierarchy):

Theorem. If NC^d = NC^{d+1}, then NC = NC^d.

Proof. We can show that NC^d = NC^{d+1} implies NC^{d+1} = NC^{d+2}, which gives the result by induction. We want to use the assumption that all circuits with depth O(log^{d+1} n) and size O(n^c) can be converted to an equivalent circuit of depth O(log^d n) and size O(n^{c′}) (with different constants in the O(·) notation). Suppose we have a circuit C of depth m log^{d+2} n (m a constant) and size O(n^c). Look at all nodes at levels m log^{d+1} n, 2m log^{d+1} n, ..., m log^{d+2} n. For any constant p, a particular node at the (pm log^{d+1} n)-th level is the value of a circuit with depth m log^{d+1} n and size O(n^c), with leaves at the ((p+1)m log^{d+1} n)-th level. Replace each of these circuits by an equivalent circuit of depth m log^d n and size O(n^{c′}) (by assumption). The new circuit has depth O(log^{d+1} n) and size O(n^{c+c′}). Therefore, we get the result.

Therefore, it is most likely that each inclusion of NC^d ⊆ AC^d ⊆ NC^{d+1} is strict (the only one proved, however, is d = 0).

So where does NC live in relation to other classes? We can show the following:

Theorem. NC^1 ⊆ L.

Proof. Given x with |x| = n, we construct the nth circuit of the circuit family, and then just evaluate the circuit from the output gate. Since we only need to record the path to the current gate, and any results along the path, and since the circuit has log depth, we only need log space to do this.

As an immediate consequence, we have NC^1 ⊆ PSPACE.

Example. We will show how to add two n-bit numbers in AC^0 (the result is an (n+1)-bit number). Let the two numbers be x = x_1 ... x_n, y = y_1 ... y_n. Let q_i = x_i ∧ y_i and p_i = x_i ∨ y_i for all i. Initialize c_1 = 0, and let c_i = ∨_{j=1}^{i−1} (q_j ∧ ∧_{k=j+1}^{i−1} p_k) for i ≥ 2. Let z_i = x_i ⊕ y_i ⊕ c_i for all i, and z_{n+1} = c_{n+1}. The output is z_1 ... z_{n+1}. We can compute all q_i, p_i at the same time (we only need 1 level). Each c_i can be computed in 2 levels, and each z_i also in 2 levels.
Therefore, we only need 5 levels, which is constant. Therefore, addition is in AC^0.

4.4 Parity & Switching Lemma

Our goal here is to separate AC^0 and NC^1.

Definition. Let ⊕(x_1, ..., x_n) be the function that computes the sum of the x_i mod 2.

The goal here (using the Switching Lemma) is to prove:

Theorem. ⊕ ∉ AC^0.

Håstad proved the following (called the HSL):

Theorem (Håstad Switching Lemma). Let f be a k-DNF formula (a disjunction of conjunctions, where each conjunction has at most k literals). Let ρ be a random restriction assigning random values to t randomly selected input bits. Then for all s ≥ 2,

Pr_ρ[f restricted to ρ is not expressible in s-CNF] ≤ ((n − t) k^10 / n)^{s/2}.
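The carry formulas in the AC^0 addition example can be checked against ordinary integer addition. A minimal sketch; the least-significant-bit-first indexing, and the choice of ∧ for q (generate) and ∨ for p (propagate), are assumptions about the intended notation:

```python
# Carry-lookahead addition following the AC^0 construction: q_j = x_j AND y_j
# generates a carry, p_k = x_k OR y_k propagates one, and
# c_i = OR_j ( q_j AND (AND_{j<k<i} p_k) ) is constant-depth, unbounded fan-in.

def ac0_add(xbits, ybits):
    n = len(xbits)
    q = [x & y for x, y in zip(xbits, ybits)]  # carry generated at position j
    p = [x | y for x, y in zip(xbits, ybits)]  # carry propagated through k
    carry = [0] * (n + 1)
    for i in range(1, n + 1):
        carry[i] = int(any(q[j] and all(p[k] for k in range(j + 1, i))
                           for j in range(i)))
    z = [xbits[i] ^ ybits[i] ^ carry[i] for i in range(n)]
    return z + [carry[n]]
```

The Python loop computes each c_i sequentially, but every c_i depends only on the q_j, p_k, matching the 2-level circuit per output bit.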

18 4.4 Parity & Switching Lemma Ryan Dougherty There are many proofs of HSL, but we use the version as printed in the Arora-Barak text. Lemma HSL implies Theorem Proof. Let C be an AC 0 circuit with a simplification: every vertex has out-degree 1, all gates are right after the input gates, the, gates alternate, and the bottom level has gates with in-degree also 1. It is clear that we can construct this equivalent circuit with a polynomial number of gates. The reasoning is the following: We can remove gates by adding n additional source gates which are the negation of the original n inputs with only a constant-size blowup in the number of vertices, We can reduce the fan-out of each gate to 1 with a constant-size blowup (in terms of the depth d) in the number of vertices. Let the upper bound on the number of gates be n c. We restrict more variables at each step of the computation: if there are n i unrestricted variables at step i, then we restrict n n i of them for the next step. Therefore, n i = 2i n. Let ci = 10c 2 i. With high probability, we prove that at step i, we have a (d i)-depth circuit, with c i fan-in at the lowest level. Suppose that the last level has gates and the penultimate one has gates; it is easy to see that each gate computes a c i -DNF. By HSL, with probability ( ) c 10 ci+1 i i+1 n 10n c for large-enough n, the function after the (i+1)-th iteration is expressible as a c i+1 -CNF (the original statement said not, but this is the same since we subtract the probability from 1). Since, for any CNF formula, the top gate is, we merge the gates with ones above them, which reduces the depth by 1. By the same reasoning, we can use HSL to transform the bottom-level gates, which has the level above it computes a c i -CNF, into a c i+1 -DNF. We apply HSL at most once for each of the n c gates. 
Since, by the union bound,

Pr[∪_{i=1}^n A_i] ≤ Σ_{i=1}^n Pr[A_i],

we can achieve, with probability 9/10, a depth-2 circuit with fan-in c_{d-2} at the lowest level if we apply HSL d - 2 times. The result is a k-CNF or k-DNF formula; we can make it constant by fixing k variables (i.e., ensure the first term of the DNF is true). Since ⊕ is not constant under any restriction of fewer than n variables (even if we fix n - 1 of them, the last variable still prevents it from being constant), we have the theorem.

Now we prove the Switching Lemma:

Proof. Let a min-term m of a function f be a partial assignment that makes f output 1 no matter what the assignment to the other variables is (i.e., the other variables are "useless");

similarly, a max-term does the same for f but outputs 0 instead. We can see that every term in a k-DNF gives a min-term of size at most k, and every clause in a k-CNF formula gives a max-term of size at most k. We can assume w.l.o.g. that the max- and min-terms are minimal in size (i.e., no smaller max/min-terms are possible). Therefore, a function not expressible in s-CNF must have at least one max-term of length ≥ s + 1.

For restrictions on disjoint sets of variables A, B, let AB be the union of the restrictions. Let R_t be the set of all restrictions on t variables, for 2t ≥ n (so |R_t| = 2^t · C(n, t)). Let B be the set of restrictions ρ ∈ R_t such that f restricted to ρ is not expressible in s-CNF. We want to show B is small. We show this by giving a 1-1 mapping between B and S = R_{t+s} × {0,1}^ℓ for ℓ ∈ O(s log k). We have |S| = C(n, t+s) · 2^{t+s} · 2^{O(s log k)}. Therefore,

|B| / |R_t| ≤ (C(n, t+s) / C(n, t)) · k^{O(s)}.

This is an upper bound on the probability of choosing a bad restriction. We can see from the presentation of HSL that s and k are constants in a sense; therefore, this quantity is bounded by n^{-Ω(s)}. We just need the 1-1 mapping now.

Suppose ρ is a bad restriction on f, where f is a k-DNF. No term of f becomes 1 under ρ (otherwise f under this restriction becomes the constant function 1). Therefore, some, but not all (by the same reasoning), of the terms are 0 under the restriction. Since f restricted to ρ is not expressible in s-CNF, there exists a max-term m with |m| ≥ s + 1. We can see that f restricted to ρm is the constant function 0, and f restricted to ρm' is nonzero for every proper sub-restriction m' of m. We want this 1-1 mapping to send ρ to ρσ for σ an assignment to variables of m; then ρσ restricts t + s variables, as we want. Give an order to f's terms, and within each term, order the variables in any way. We have that ρm sets f to be the 0-function (of course, all terms are set to 0 here). Split m into p ≤ s sub-restrictions m_1, ..., m_p, where each is defined as follows. Assume that we have an i such that m_1 ⋯ m_{i-1} ≠ m.
Let t_{l_i} be the first term in our ordering that is not set to 0 under ρ m_1 ⋯ m_{i-1} (such a term exists since m is a max-term, and m_1 ⋯ m_{i-1} is not, being a proper subset of it). Let Y_i be the variables of t_{l_i} set by m but not by ρ m_1 ⋯ m_{i-1}. Since m sets t_{l_i} to 0, Y_i is not empty. Let m_i be the restriction of the variables in Y_i that agrees with m. Also, define σ_i to be any restriction of Y_i under which t_{l_i} is not set to 0. We continue until we have assigned at least s variables, which happens when we reach m_1 ⋯ m_p (and, correspondingly, σ_1 ⋯ σ_p). If we have assigned more than s variables, trim m_p (i.e., use fewer of its variables) so that we have exactly s variables. The mapping is ρ ↦ (ρ σ_1 ⋯ σ_p, c), with c a string of O(s log k) bits (defined later).

Suppose we are given the assignment ρ σ_1 ⋯ σ_p. We apply this assignment to f and observe which term is t_{l_1} (the first term not fixed to 0 by ρ; since we split m, this property holds under σ_1, and σ_2, ..., σ_p do not assign values to the variables of t_{l_1}).

Let s_1 be the number of variables in m_1. The string c contains s_1, the indices within t_{l_1} of the variables in m_1, and the values that m_1 assigns to those variables (we can see this is sufficient information). Since each term has at most k variables, we only need O(s_1 log k) bits to store this. After this is done, we change the restriction from ρ σ_1 ⋯ σ_p to ρ m_1 σ_2 ⋯ σ_p, and then repeat the process for s_2. In this process, we figure out what m_1, ..., m_p are. We can then undo them to reconstruct ρ, which shows the mapping is 1-1, and the total storage is O(s log k) bits, where s is just Σ_{i=1}^p s_i.

4.5 ACC Circuits

Sometimes we want even more power out of our circuits. This is precisely what the ACC hierarchy is for.

Definition. Let MOD_m be the gate that, on boolean inputs, outputs 0 if the sum of the inputs is 0 (mod m), and 1 otherwise. We have L ∈ ACC^0(m_1, ..., m_k) if there is a circuit family {C_n} with O(1) depth, polynomial size, and unbounded fan-in, where each circuit has ∧, ∨, ¬, and MOD_{m_1}, ..., MOD_{m_k} gates, that accepts L. We define ACC^0 to be the union of ACC^0(m_1, ..., m_k) over all k ≥ 0 and all m_i > 0.

Theorem. AC^0 ⊊ ACC^0, with strict inclusion.

Proof. Inclusion is obvious. Strictness holds because ⊕ ∉ AC^0 (above), while ⊕ ∈ ACC^0: we can just use a single MOD_2 gate on all inputs.

A recent result by Ryan Williams in 2010 shows that NEXP ⊄ ACC^0.

4.6 TC Hierarchy

Another hierarchy is the TC hierarchy, which instead of MOD gates has majority gates:

Definition. A majority gate is one that, on input x_1, ..., x_n, outputs 1 if the sum of the x_i is ≥ n/2, and 0 otherwise (i.e., at least half of the x_i are 1). We define TC^i to be the class of languages decided by poly-size, O(log^i n)-depth, unbounded fan-in circuit families with majority gates. Also, we define TC to be the union of the TC^i.

Theorem. AC^0 ⊆ TC^0.

Proof. Let an ∧ gate be called AND, ∨ be called OR, and majority be called MAJ. The value of AND(x_1, ..., x_n) is exactly MAJ(x_1, ..., x_n, 0, ..., 0), where we have n zeroes.
For OR(x_1, ..., x_n), it is MAJ(x_1, ..., x_n, 1, ..., 1), where there are n - 2 ones (under the "at least half" convention for MAJ above; with n ones the gate would be the constant 1). We can just hardwire the respective constants in, which does not change the depth or size.

We can actually define TC^0 in terms of threshold gates, written T_k(x_1, ..., x_n), which output 1 if the sum of the x_i is at least k, and 0 otherwise. Note that this generalizes majority gates. We can in fact simulate any threshold gate with a single majority gate:

T_k(x_1, ..., x_n) is precisely MAJ(x_1, ..., x_n, 0, ..., 0) if k ≥ n/2, where there are 2k - n zeroes,

T_k(x_1, ..., x_n) is precisely MAJ(x_1, ..., x_n, 1, ..., 1) otherwise, where there are n - 2k ones.

Theorem. TC^0 ⊆ NC^1.

Proof. Each of the threshold gates can be replaced by an NC^1 circuit (O(log n) depth, bounded fan-in). Since TC^0 circuits have constant depth, the resulting circuit has O(log n) depth and bounded fan-in.

Corollary. NC = AC = TC.

Exercises

1. A language L ⊆ {0,1}* is sparse if there is a polynomial p such that |L ∩ {0,1}^n| ≤ p(n) for every n ∈ N. Show that every sparse language is in P/poly. Hint: look at L_n = L ∩ {0,1}^n, the set of strings in L that have length n. What do we know about L_n? What can we do with these strings?

2. Going off the previous question, show that if a sparse language is NP-complete, then P = NP. Hint: this is called Mahaney's Theorem. Look at L', the set of pairs (φ, y), where y is a string and φ is a formula with some satisfying assignment that is lexicographically at most y (L' is NP-complete).

3. Let L^i be the class of languages decided by O(log^i n)-space TMs, and define NL^i the same way for NTMs.

(a) Show that NC^i ⊆ L^i for all i.

(b) Show that if NL^i ⊆ NC^{2i} for all i, then we solve a major open problem. Note that this is true for i = 1.
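Before moving on, the gate simulations from the last two sections can be checked exhaustively for small n. One caveat: under the "at least half" convention for MAJ used above, OR needs n - 2 hardwired ones (the k = 1 case of the threshold construction), since padding with n ones makes the gate constant. The function names below are mine.

```python
from itertools import product

def mod_gate(m, xs):
    """MOD_m gate: 0 if sum(xs) is divisible by m, 1 otherwise."""
    return 0 if sum(xs) % m == 0 else 1

def maj(xs):
    """Majority gate: 1 iff at least half of the inputs are 1."""
    return 1 if 2 * sum(xs) >= len(xs) else 0

def threshold_via_maj(k, x):
    """T_k from one MAJ gate with hardwired constants, as above:
    pad with 2k - n zeroes when k >= n/2, else with n - 2k ones."""
    n = len(x)
    pad = [0] * (2 * k - n) if 2 * k >= n else [1] * (n - 2 * k)
    return maj(list(x) + pad)

n = 4
for x in product([0, 1], repeat=n):
    assert mod_gate(2, x) == sum(x) % 2            # one MOD_2 gate is parity
    assert maj(list(x) + [0] * n) == min(x)        # AND = MAJ with n zeroes
    assert maj(list(x) + [1] * (n - 2)) == max(x)  # OR  = MAJ with n-2 ones
    for k in range(1, n + 1):
        assert threshold_via_maj(k, x) == (1 if sum(x) >= k else 0)
```

The padding counts follow from setting the majority threshold (half the padded input count) equal to the desired threshold k.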

5 Randomization

In the previous sections, we have only dealt with (non)deterministic TMs. However, there are many important classes relating to TMs that use randomness, as we define below:

Definition. A probabilistic TM (PTM) has two transition functions; at each step we choose, independently with probability 1/2, which of the two functions to apply, like a coin flip. We say that a PTM M halts in T(n) time if M halts on every input x within T(|x|) steps, regardless of the choices made during execution.

We can now define the probabilistic analog of P, which is BPP:

Definition. A language L is in BPTIME(T(n)) if there exists a PTM M that decides L in time T(n), meaning M halts within T(|x|) steps regardless of its choices, and Pr[M(x) = L(x)] ≥ 2/3, where L(x) = 1 if x ∈ L, and L(x) = 0 if x ∉ L. We define:

BPP = ∪_{c ≥ 0} BPTIME(n^c).

It is clear that P ⊆ BPP: the definition of P is the same except without coin flips, and with zero error in the accepting/rejecting probability. We also give an alternate definition of BPP:

Definition. L ∈ BPP if there exist a poly-time TM M and a polynomial p such that for all x ∈ {0,1}*, Pr_{r ∈ {0,1}^{p(|x|)}}[M(x, r) = L(x)] ≥ 2/3, where the random string r plays the role of a certificate.

Theorem. BPP ⊆ P/poly.

This proof will use what is called success amplification: the essential idea is that, given polynomial time, we can take a majority vote over many independent runs to reduce the error bound for BPP to something far smaller. We do this as follows. For a PTM M that computes a language L with success probability at least 1/2 + n^{-c}, run M for k independent trials, and accept only if at least half of the trials accept. Define X_i = 1 if the i-th trial is correct and 0 otherwise, and let X = Σ_{i=1}^k X_i. Therefore, Pr[X_i = 1] ≥ 1/2 + n^{-c}; call this value p. Then E[X] ≥ pk. Let δ = 1 - 1/(2p).
Therefore, we now want to bound the probability that the majority vote above is incorrect, using a Chernoff bound:

Pr[X < (1 - δ)pk] ≤ exp(-(1 - 1/(2p))^2 pk / 2) ≤ exp(-k / (2n^{2c})).

So for k a suitable polynomial (e.g., k = 2n^{2c+d}), we can achieve an exponentially small error bound. Now, onto the proof of BPP ⊆ P/poly.
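First, a quick empirical sanity check that majority voting amplifies a weak advantage, with toy parameters standing in for 1/2 + n^{-c} (all names below are mine):

```python
import random

random.seed(0)

def noisy_vote(p, k):
    """Majority vote over k independent trials, each correct w.p. p."""
    correct = sum(random.random() < p for _ in range(k))
    return 2 * correct > k

p = 0.5 + 0.1        # toy advantage over 1/2
trials = 2000
single = sum(random.random() < p for _ in range(trials)) / trials
amplified = sum(noisy_vote(p, 101) for _ in range(trials)) / trials
print(round(single, 3), round(amplified, 3))
```

With a per-trial advantage of only 0.1, a 101-trial majority vote is already correct the vast majority of the time, as the Chernoff bound predicts.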

Proof. Let L ∈ BPP. By success amplification, there is a poly-time PTM M such that L(M) = L with error smaller than 2^{-n}. Fix an input length n. For each x ∈ {0,1}^n, Pr[M computes incorrectly on x] < 2^{-n}. We can observe that:

Pr[M computes incorrectly on some x ∈ {0,1}^n] ≤ Σ_{x ∈ {0,1}^n} Pr[M computes incorrectly on x] < 2^n · 2^{-n} = 1.

Therefore, there is some sequence of coin flips with which M makes no error on any x ∈ {0,1}^n. All we need to do is convert M into a circuit and hardwire these coin flips, which completes the proof.

Theorem (Sipser-Gács). BPP ⊆ Σ_2^P ∩ Π_2^P.

Proof. As before, for L ∈ BPP there is a poly-time PTM M such that L(M) = L with error smaller than 2^{-n}. Since BPP is closed under complement, we only need to show BPP ⊆ Σ_2^P. For x ∈ {0,1}^n, let S_x be the set of coin-flip strings on which M accepts x, and let p be the polynomial bounding the number of random bits M uses. Therefore, either |S_x| ≥ (1 - 2^{-n}) 2^{p(n)} or |S_x| ≤ 2^{p(n)-n}, according to whether x ∈ L or not. We will show that we can differentiate between these two cases using only 2 quantifiers.

For A ⊆ {0,1}^{p(n)} and b ∈ {0,1}^{p(n)}, define A + b = {a + b : a ∈ A}, where the addition is bitwise mod 2. Let k = ⌈p(n)/n⌉ + 1. We will show the following:

1. For all S ⊆ {0,1}^{p(n)} with |S| ≤ 2^{p(n)-n} and any k vectors u_1, ..., u_k ∈ {0,1}^{p(n)}, we have ∪_{i=1}^k (S + u_i) ≠ {0,1}^{p(n)} (i.e., if S is small, then we cannot cover all of {0,1}^{p(n)}).

2. For all S ⊆ {0,1}^{p(n)} with |S| ≥ (1 - 2^{-n}) 2^{p(n)}, there exist k vectors u_1, ..., u_k with ∪_{i=1}^k (S + u_i) = {0,1}^{p(n)} (i.e., if S is large, then we can cover all of {0,1}^{p(n)}).

In other words, we can express "x ∈ L" with only 2 quantifiers; proving these claims implies that BPP ⊆ Σ_2^P ∩ Π_2^P.

For the first claim, we have |S + u_i| = |S|, so the union has size at most k|S| < 2^{p(n)} for large enough n. For the second claim, if the u_i are chosen independently at random, then showing Pr[∪_{i=1}^k (S + u_i) = {0,1}^{p(n)}] > 0 suffices. Let B_r be the event that r ∉ ∪_{i=1}^k (S + u_i).
This means we need to prove that Pr[∪_{r ∈ {0,1}^{p(n)}} B_r] < 1. By the union bound, it suffices to show that Pr[B_r] < 2^{-p(n)} for every r. Define B_r^i to be the event that r ∉ S + u_i (equivalently, r + u_i ∉ S). We can see that B_r = ∩_{1 ≤ i ≤ k} B_r^i. However, r + u_i is uniformly distributed over {0,1}^{p(n)}, so r + u_i ∈ S with probability at least 1 - 2^{-n}. Also, since the B_r^i are independent over i, we have Pr[B_r] = Π_i Pr[B_r^i] ≤ 2^{-nk} < 2^{-p(n)}. It is perhaps surprising that we were able to prove Pr[B_r] < 2^{-p(n)}, whereas we only needed the probability of some B_r occurring to be less than 1.
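Both covering claims can be verified by brute force in a toy universe (here p = 6 and n = 3, so k = 3 shifts; all names below are mine):

```python
import random

random.seed(1)
p, n = 6, 3                      # universe {0,1}^p, error 2^{-n}
k = p // n + 1                   # k = 3 shifts
universe = set(range(2 ** p))

def covers(S, us):
    """Does the union of the GF(2)-shifts S + u_i cover all of {0,1}^p?"""
    return {s ^ u for u in us for s in S} == universe

# Claim 1: if |S| <= 2^{p-n}, no k shifts can cover, since k|S| < 2^p.
S_small = set(range(2 ** (p - n)))
assert k * len(S_small) < 2 ** p
assert not covers(S_small, random.sample(range(2 ** p), k))

# Claim 2: if |S| >= (1 - 2^{-n}) 2^p, some k random shifts do cover.
S_large = universe - {0, 7, 21, 42, 63}   # only 5 < 2^{p-n} points missing
found = any(covers(S_large, [random.randrange(2 ** p) for _ in range(k)])
            for _ in range(200))
assert found
```

Just as in the proof, the large set is covered by almost every random choice of shifts, while the small set can never be covered by any choice.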

5.1 RP, coRP, ZPP

Now we look at one-sided error classes; before, we had BPP, a randomized class with two-sided error (the probability of accepting given x ∈ L was strictly less than 1, and the probability of accepting given x ∉ L was strictly greater than 0).

Definition. We have L ∈ RTIME(T(n)) if there is a T(n)-time PTM M such that if x ∈ L, then Pr[M(x) = 1] ≥ 1/2, and if x ∉ L, then Pr[M(x) = 1] = 0. Define RP = ∪_{c ≥ 0} RTIME(n^c).

Of course, we can make the constant exponentially close to 1 with success amplification, as we did for BPP. The class coRP has its error on the other side. In order to define a complexity class with no error for a PTM (i.e., ZPP), we need to define expected running time:

Definition. Let M be a PTM with input x, and let T_{M,x} be the random variable for the running time of M on x; that is, Pr[T_{M,x} = T] = p means M halts on x within T steps with probability p. We say M has expected running time T(n) if E[T_{M,x}] ≤ T(|x|) for all x.

Definition. We have L ∈ ZTIME(T(n)) when there is a PTM M that runs in expected time O(T(n)) and for all x, M(x) = L(x) (i.e., no error). Define ZPP = ∪_{c ≥ 0} ZTIME(n^c).

So how do these 3 randomized classes relate to each other? In fact, we will prove the following:

Theorem. ZPP = RP ∩ coRP.

It is convenient to recast a ZPP machine as one that may output "?": by running an expected-poly-time machine for twice its expected running time and outputting ? if it has not halted, Markov's inequality gives a poly-time PTM that outputs ? with probability at most 1/2. Thus, if x ∈ L, then M on x outputs 1 or ? (i.e., it does not know); and if x ∉ L, then M on x outputs 0 or ?. If M does not output ?, it did not give an incorrect answer (and if it outputs ?, it has a 1/2 chance of getting the answer correct by choosing randomly). This is important for the proof.

Proof. We need to prove that ZPP ⊆ RP and ZPP ⊆ coRP; we prove the latter since the former is very similar. Suppose L ∈ ZPP; by the characterization above, there exists a PTM M that either correctly decides whether x ∈ L or outputs ?. Let M' be a TM that, on x, outputs accept if M on x outputs ?, and otherwise outputs M's output. If x ∈ L, then M' on x outputs 1. If x ∉ L, then either M outputs 0 (correctly) or ?,
in which case M''s output is incorrect; this happens with probability at most 1/2. Therefore, by definition, L ∈ coRP, which shows ZPP ⊆ coRP.

For RP ∩ coRP ⊆ ZPP, let L ∈ RP ∩ coRP. Then there exist two poly-time PTMs A and B such that: if x ∈ L, then A(x) = 1 always, and if x ∉ L, then A(x) = 1 with probability at most 1/2; if x ∉ L, then B(x) = 0 always, and if x ∈ L, then B(x) = 0 with probability at most 1/2. We construct a PTM M that does the following:

1. Run A on x; if A(x) = 0, return "x ∉ L".

2. Run B on x; if B(x) = 1, return "x ∈ L".

3. Otherwise, return ?.

We see that if M(x) does not return ?, the result is correct. Also, M(x) returns ? with probability at most 1/2, by the following argument: assume x ∈ L (the x ∉ L case is similar). Then step 1 never fires, so M returns ? if and only if B(x) = 0, and B(x) = 0 for x ∈ L with probability at most 1/2. Therefore, L ∈ ZPP, which shows RP ∩ coRP ⊆ ZPP. Together, we have ZPP = RP ∩ coRP.

5.2 Randomized Reductions

So we have various randomized classes; what is the analog of a reduction for them? We have seen deterministic reductions for NP and other classes. We now define a randomized reduction:

Definition. We say that A randomly reduces to B (written A ≤_r B) if there is a poly-time PTM M such that for all x, Pr[B(M(x)) = A(x)] ≥ 2/3.

Unlike the reductions we have seen before, this reduction is not known to be transitive. To use randomized reductions, we define the class of languages that randomly reduce to 3SAT:

Definition. BP·NP = {L : L ≤_r 3SAT}.

Note that this is quite different from NP. There is even a large difference between 3SAT and its complement here, as we will show below. Now we prove the following:

Theorem. BP·NP ⊆ NP/poly.

Proof. Let L ∈ BP·NP, and let M be the PTM performing the reduction from L to 3SAT: for all x, Pr_r[L(x) = 3SAT(M(x, r))] ≥ 2/3, where r ranges over {0,1}^{poly(|x|)}. As before, we can amplify the bound to 1 - 2^{-(|x|+1)}. For a given x, the number of r with |r| = m such that L(x) ≠ 3SAT(M(x, r)) is at most 2^{m-|x|-1}. For a fixed n, the number of such bad r over all x ∈ {0,1}^n is at most 2^n · 2^{m-n-1} = 2^{m-1}, which is less than the total number of r with |r| = m. Therefore, there is an r* such that for every x with |x| = n, L(x) = 3SAT(M(x, r*)). Construct a non-deterministic circuit C_n taking an input x of length n and a certificate y: C_n(x, y) checks whether y is a satisfying assignment of M(x, r*) (this involves a poly-size circuit and can be done in poly time). We hardwire r* for each input length n.
Therefore, x ∈ L if and only if M(x, r*) ∈ 3SAT, if and only if there exists a y such that C_n(x, y) = 1, implying that L ∈ NP/poly.

Theorem. If the complement of 3SAT is in BP·NP, then PH = Σ_3^P.

Proof. We just showed BP·NP ⊆ NP/poly, so under the assumption the complement of 3SAT is in NP/poly. We also showed earlier that if Π_3^P ⊆ Σ_3^P, then PH = Σ_3^P. So we just need to prove that the complement of 3SAT being in NP/poly implies Π_3^P ⊆ Σ_3^P. We use the Π_3 SAT problem, which is Π_3^P-complete, and show it is in Σ_3^P. Instances of this problem have the form ∀x_1 ∃x_2 ∀x_3 φ(x_1, x_2, x_3) = 1. If we fix x_1 and x_2, then deciding whether ∀x_3 φ(x_1, x_2, x_3) = 1 is an instance of the complement of 3SAT. By assumption, there is a poly-size circuit which decides the complement of 3SAT with access to a poly-size certificate y. We thus have ∃w ∀x_1 ∃x_2, y such that w is the description of such a circuit certifying, with certificate y, that ∀x_3 φ(x_1, x_2, x_3) is true. Therefore, this is in Σ_3^P.
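The counting step in this proof (a good r* exists because the bad strings, over all inputs of length n, union to fewer than 2^m strings) can be illustrated numerically with toy parameters (all names below are mine):

```python
import random

random.seed(2)
n, m = 4, 10
xs = range(2 ** n)
# Each input x is wrong on at most 2^{m-n-1} of the random strings r.
bad = {x: set(random.sample(range(2 ** m), 2 ** (m - n - 1))) for x in xs}

# Union bound: total bad pairs <= 2^n * 2^{m-n-1} = 2^{m-1} < 2^m,
# so some r* is good for every input of length n simultaneously.
all_bad = set().union(*bad.values())
good = [r for r in range(2 ** m) if r not in all_bad]
assert len(good) >= 2 ** (m - 1)
r_star = good[0]   # any such r* can be hardwired into the circuit C_n
```

No matter how the bad sets are chosen, at least half of the strings r survive, which is exactly the pigeonhole argument used to pick r*.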


More information

NP, polynomial-time mapping reductions, and NP-completeness

NP, polynomial-time mapping reductions, and NP-completeness NP, polynomial-time mapping reductions, and NP-completeness In the previous lecture we discussed deterministic time complexity, along with the time-hierarchy theorem, and introduced two complexity classes:

More information

Theory of Computation. Ch.8 Space Complexity. wherein all branches of its computation halt on all

Theory of Computation. Ch.8 Space Complexity. wherein all branches of its computation halt on all Definition 8.1 Let M be a deterministic Turing machine, DTM, that halts on all inputs. The space complexity of M is the function f : N N, where f(n) is the maximum number of tape cells that M scans on

More information

De Morgan s a Laws. De Morgan s laws say that. (φ 1 φ 2 ) = φ 1 φ 2, (φ 1 φ 2 ) = φ 1 φ 2.

De Morgan s a Laws. De Morgan s laws say that. (φ 1 φ 2 ) = φ 1 φ 2, (φ 1 φ 2 ) = φ 1 φ 2. De Morgan s a Laws De Morgan s laws say that (φ 1 φ 2 ) = φ 1 φ 2, (φ 1 φ 2 ) = φ 1 φ 2. Here is a proof for the first law: φ 1 φ 2 (φ 1 φ 2 ) φ 1 φ 2 0 0 1 1 0 1 1 1 1 0 1 1 1 1 0 0 a Augustus DeMorgan

More information

Theory of Computation Time Complexity

Theory of Computation Time Complexity Theory of Computation Time Complexity Bow-Yaw Wang Academia Sinica Spring 2012 Bow-Yaw Wang (Academia Sinica) Time Complexity Spring 2012 1 / 59 Time for Deciding a Language Let us consider A = {0 n 1

More information

Third Year Computer Science and Engineering, 6 th Semester

Third Year Computer Science and Engineering, 6 th Semester Department of Computer Science and Engineering Third Year Computer Science and Engineering, 6 th Semester Subject Code & Name: THEORY OF COMPUTATION UNIT-I CHURCH-TURING THESIS 1. What is a Turing Machine?

More information

Non-Deterministic Time

Non-Deterministic Time Non-Deterministic Time Master Informatique 2016 1 Non-Deterministic Time Complexity Classes Reminder on DTM vs NDTM [Turing 1936] (q 0, x 0 ) (q 1, x 1 ) Deterministic (q n, x n ) Non-Deterministic (q

More information

Computational Complexity Theory

Computational Complexity Theory Computational Complexity Theory Marcus Hutter Canberra, ACT, 0200, Australia http://www.hutter1.net/ Assumed Background Preliminaries Turing Machine (TM) Deterministic Turing Machine (DTM) NonDeterministic

More information

Introduction to Languages and Computation

Introduction to Languages and Computation Introduction to Languages and Computation George Voutsadakis 1 1 Mathematics and Computer Science Lake Superior State University LSSU Math 400 George Voutsadakis (LSSU) Languages and Computation July 2014

More information

SOLUTION: SOLUTION: SOLUTION:

SOLUTION: SOLUTION: SOLUTION: Convert R and S into nondeterministic finite automata N1 and N2. Given a string s, if we know the states N1 and N2 may reach when s[1...i] has been read, we are able to derive the states N1 and N2 may

More information

Computational Complexity: A Modern Approach

Computational Complexity: A Modern Approach 1 Computational Complexity: A Modern Approach Draft of a book in preparation: Dated December 2004 Comments welcome! Sanjeev Arora Not to be reproduced or distributed without the author s permission I am

More information

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan August 30, Notes for Lecture 1

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan August 30, Notes for Lecture 1 U.C. Berkeley CS278: Computational Complexity Handout N1 Professor Luca Trevisan August 30, 2004 Notes for Lecture 1 This course assumes CS170, or equivalent, as a prerequisite. We will assume that the

More information

Review of Basic Computational Complexity

Review of Basic Computational Complexity Lecture 1 Review of Basic Computational Complexity March 30, 2004 Lecturer: Paul Beame Notes: Daniel Lowd 1.1 Preliminaries 1.1.1 Texts There is no one textbook that covers everything in this course. Some

More information

1 AC 0 and Håstad Switching Lemma

1 AC 0 and Håstad Switching Lemma princeton university cos 522: computational complexity Lecture 19: Circuit complexity Lecturer: Sanjeev Arora Scribe:Tony Wirth Complexity theory s Waterloo As we saw in an earlier lecture, if PH Σ P 2

More information

Complete problems for classes in PH, The Polynomial-Time Hierarchy (PH) oracle is like a subroutine, or function in

Complete problems for classes in PH, The Polynomial-Time Hierarchy (PH) oracle is like a subroutine, or function in Oracle Turing Machines Nondeterministic OTM defined in the same way (transition relation, rather than function) oracle is like a subroutine, or function in your favorite PL but each call counts as single

More information

CS 151 Complexity Theory Spring Solution Set 5

CS 151 Complexity Theory Spring Solution Set 5 CS 151 Complexity Theory Spring 2017 Solution Set 5 Posted: May 17 Chris Umans 1. We are given a Boolean circuit C on n variables x 1, x 2,..., x n with m, and gates. Our 3-CNF formula will have m auxiliary

More information

CSE 105 THEORY OF COMPUTATION

CSE 105 THEORY OF COMPUTATION CSE 105 THEORY OF COMPUTATION Fall 2016 http://cseweb.ucsd.edu/classes/fa16/cse105-abc/ Logistics HW7 due tonight Thursday's class: REVIEW Final exam on Thursday Dec 8, 8am-11am, LEDDN AUD Note card allowed

More information

The space complexity of a standard Turing machine. The space complexity of a nondeterministic Turing machine

The space complexity of a standard Turing machine. The space complexity of a nondeterministic Turing machine 298 8. Space Complexity The space complexity of a standard Turing machine M = (Q,,,, q 0, accept, reject) on input w is space M (w) = max{ uav : q 0 w M u q av, q Q, u, a, v * } The space complexity of

More information

Lecture 23: More PSPACE-Complete, Randomized Complexity

Lecture 23: More PSPACE-Complete, Randomized Complexity 6.045 Lecture 23: More PSPACE-Complete, Randomized Complexity 1 Final Exam Information Who: You On What: Everything through PSPACE (today) With What: One sheet (double-sided) of notes are allowed When:

More information

Show that the following problems are NP-complete

Show that the following problems are NP-complete Show that the following problems are NP-complete April 7, 2018 Below is a list of 30 exercises in which you are asked to prove that some problem is NP-complete. The goal is to better understand the theory

More information

Lecture 8: Complete Problems for Other Complexity Classes

Lecture 8: Complete Problems for Other Complexity Classes IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Basic Course on Computational Complexity Lecture 8: Complete Problems for Other Complexity Classes David Mix Barrington and Alexis Maciel

More information

COMPLEXITY THEORY. PSPACE = SPACE(n k ) k N. NPSPACE = NSPACE(n k ) 10/30/2012. Space Complexity: Savitch's Theorem and PSPACE- Completeness

COMPLEXITY THEORY. PSPACE = SPACE(n k ) k N. NPSPACE = NSPACE(n k ) 10/30/2012. Space Complexity: Savitch's Theorem and PSPACE- Completeness 15-455 COMPLEXITY THEORY Space Complexity: Savitch's Theorem and PSPACE- Completeness October 30,2012 MEASURING SPACE COMPLEXITY FINITE STATE CONTROL I N P U T 1 2 3 4 5 6 7 8 9 10 We measure space complexity

More information

Theory of Computation Space Complexity. (NTU EE) Space Complexity Fall / 1

Theory of Computation Space Complexity. (NTU EE) Space Complexity Fall / 1 Theory of Computation Space Complexity (NTU EE) Space Complexity Fall 2016 1 / 1 Space Complexity Definition 1 Let M be a TM that halts on all inputs. The space complexity of M is f : N N where f (n) is

More information

6.045: Automata, Computability, and Complexity (GITCS) Class 17 Nancy Lynch

6.045: Automata, Computability, and Complexity (GITCS) Class 17 Nancy Lynch 6.045: Automata, Computability, and Complexity (GITCS) Class 17 Nancy Lynch Today Probabilistic Turing Machines and Probabilistic Time Complexity Classes Now add a new capability to standard TMs: random

More information

Complexity Theory. Knowledge Representation and Reasoning. November 2, 2005

Complexity Theory. Knowledge Representation and Reasoning. November 2, 2005 Complexity Theory Knowledge Representation and Reasoning November 2, 2005 (Knowledge Representation and Reasoning) Complexity Theory November 2, 2005 1 / 22 Outline Motivation Reminder: Basic Notions Algorithms

More information

Final exam study sheet for CS3719 Turing machines and decidability.

Final exam study sheet for CS3719 Turing machines and decidability. Final exam study sheet for CS3719 Turing machines and decidability. A Turing machine is a finite automaton with an infinite memory (tape). Formally, a Turing machine is a 6-tuple M = (Q, Σ, Γ, δ, q 0,

More information

Chapter 2 : Time complexity

Chapter 2 : Time complexity Dr. Abhijit Das, Chapter 2 : Time complexity In this chapter we study some basic results on the time complexities of computational problems. concentrate our attention mostly on polynomial time complexities,

More information

-bit integers are all in ThC. Th The following problems are complete for PSPACE NPSPACE ATIME QSAT, GEOGRAPHY, SUCCINCT REACH.

-bit integers are all in ThC. Th The following problems are complete for PSPACE NPSPACE ATIME QSAT, GEOGRAPHY, SUCCINCT REACH. CMPSCI 601: Recall From Last Time Lecture 26 Theorem: All CFL s are in sac. Facts: ITADD, MULT, ITMULT and DIVISION on -bit integers are all in ThC. Th The following problems are complete for PSPACE NPSPACE

More information

Week 3: Reductions and Completeness

Week 3: Reductions and Completeness Computational Complexity Theory Summer HSSP 2018 Week 3: Reductions and Completeness Dylan Hendrickson MIT Educational Studies Program 3.1 Reductions Suppose I know how to solve some problem quickly. How

More information

6.841/18.405J: Advanced Complexity Wednesday, February 12, Lecture Lecture 3

6.841/18.405J: Advanced Complexity Wednesday, February 12, Lecture Lecture 3 6.841/18.405J: Advanced Complexity Wednesday, February 12, 2003 Lecture Lecture 3 Instructor: Madhu Sudan Scribe: Bobby Kleinberg 1 The language MinDNF At the end of the last lecture, we introduced the

More information

Computational Complexity: A Modern Approach. Draft of a book: Dated January 2007 Comments welcome!

Computational Complexity: A Modern Approach. Draft of a book: Dated January 2007 Comments welcome! i Computational Complexity: A Modern Approach Draft of a book: Dated January 2007 Comments welcome! Sanjeev Arora and Boaz Barak Princeton University complexitybook@gmail.com Not to be reproduced or distributed

More information

COL 352 Introduction to Automata and Theory of Computation Major Exam, Sem II , Max 80, Time 2 hr. Name Entry No. Group

COL 352 Introduction to Automata and Theory of Computation Major Exam, Sem II , Max 80, Time 2 hr. Name Entry No. Group COL 352 Introduction to Automata and Theory of Computation Major Exam, Sem II 2015-16, Max 80, Time 2 hr Name Entry No. Group Note (i) Write your answers neatly and precisely in the space provided with

More information

Notes on Complexity Theory Last updated: November, Lecture 10

Notes on Complexity Theory Last updated: November, Lecture 10 Notes on Complexity Theory Last updated: November, 2015 Lecture 10 Notes by Jonathan Katz, lightly edited by Dov Gordon. 1 Randomized Time Complexity 1.1 How Large is BPP? We know that P ZPP = RP corp

More information

Undecidable Problems and Reducibility

Undecidable Problems and Reducibility University of Georgia Fall 2014 Reducibility We show a problem decidable/undecidable by reducing it to another problem. One type of reduction: mapping reduction. Definition Let A, B be languages over Σ.

More information

Theory of Computation CS3102 Spring 2015 A tale of computers, math, problem solving, life, love and tragic death

Theory of Computation CS3102 Spring 2015 A tale of computers, math, problem solving, life, love and tragic death Theory of Computation CS3102 Spring 2015 A tale of computers, math, problem solving, life, love and tragic death Robbie Hott www.cs.virginia.edu/~jh2jf Department of Computer Science University of Virginia

More information

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem.

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem. 1 More on NP In this set of lecture notes, we examine the class NP in more detail. We give a characterization of NP which justifies the guess and verify paradigm, and study the complexity of solving search

More information

Notes for Lecture 3... x 4

Notes for Lecture 3... x 4 Stanford University CS254: Computational Complexity Notes 3 Luca Trevisan January 18, 2012 Notes for Lecture 3 In this lecture we introduce the computational model of boolean circuits and prove that polynomial

More information

Lecture 16: Time Complexity and P vs NP

Lecture 16: Time Complexity and P vs NP 6.045 Lecture 16: Time Complexity and P vs NP 1 Time-Bounded Complexity Classes Definition: TIME(t(n)) = { L there is a Turing machine M with time complexity O(t(n)) so that L = L(M) } = { L L is a language

More information

Space and Nondeterminism

Space and Nondeterminism CS 221 Computational Complexity, Lecture 5 Feb 6, 2018 Space and Nondeterminism Instructor: Madhu Sudan 1 Scribe: Yong Wook Kwon Topic Overview Today we ll talk about space and non-determinism. For some

More information

Polynomial Time Computation. Topics in Logic and Complexity Handout 2. Nondeterministic Polynomial Time. Succinct Certificates.

Polynomial Time Computation. Topics in Logic and Complexity Handout 2. Nondeterministic Polynomial Time. Succinct Certificates. 1 2 Topics in Logic and Complexity Handout 2 Anuj Dawar MPhil Advanced Computer Science, Lent 2010 Polynomial Time Computation P = TIME(n k ) k=1 The class of languages decidable in polynomial time. The

More information

CS154, Lecture 17: conp, Oracles again, Space Complexity

CS154, Lecture 17: conp, Oracles again, Space Complexity CS154, Lecture 17: conp, Oracles again, Space Complexity Definition: conp = { L L NP } What does a conp computation look like? In NP algorithms, we can use a guess instruction in pseudocode: Guess string

More information

Computer Sciences Department

Computer Sciences Department Computer Sciences Department 1 Reference Book: INTRODUCTION TO THE THEORY OF COMPUTATION, SECOND EDITION, by: MICHAEL SIPSER Computer Sciences Department 3 ADVANCED TOPICS IN C O M P U T A B I L I T Y

More information

q FORMAL LANGUAGES, AUTOMATA AND COMPUTABILITY. FLAC (15-453) Spring L. Blum. REVIEW for Midterm 2 TURING MACHINE

q FORMAL LANGUAGES, AUTOMATA AND COMPUTABILITY. FLAC (15-453) Spring L. Blum. REVIEW for Midterm 2 TURING MACHINE FLAC (15-45) Spring 214 - L. Blum THURSDAY APRIL 15-45 REVIEW or Midterm 2 FORMAL LANGUAGES, AUTOMATA AND COMPUTABILITY TUESDAY April 8 Deinition: A Turing Machine is a 7-tuple T = (Q, Σ, Γ,, q, q accept,

More information

INAPPROX APPROX PTAS. FPTAS Knapsack P

INAPPROX APPROX PTAS. FPTAS Knapsack P CMPSCI 61: Recall From Last Time Lecture 22 Clique TSP INAPPROX exists P approx alg for no ε < 1 VertexCover MAX SAT APPROX TSP some but not all ε< 1 PTAS all ε < 1 ETSP FPTAS Knapsack P poly in n, 1/ε

More information

CS154, Lecture 13: P vs NP

CS154, Lecture 13: P vs NP CS154, Lecture 13: P vs NP The EXTENDED Church-Turing Thesis Everyone s Intuitive Notion of Efficient Algorithms Polynomial-Time Turing Machines More generally: TM can simulate every reasonable model of

More information

Space Complexity. Huan Long. Shanghai Jiao Tong University

Space Complexity. Huan Long. Shanghai Jiao Tong University Space Complexity Huan Long Shanghai Jiao Tong University Acknowledgements Part of the slides comes from a similar course in Fudan University given by Prof. Yijia Chen. http://basics.sjtu.edu.cn/ chen/

More information

NP-Completeness and Boolean Satisfiability

NP-Completeness and Boolean Satisfiability NP-Completeness and Boolean Satisfiability Mridul Aanjaneya Stanford University August 14, 2012 Mridul Aanjaneya Automata Theory 1/ 49 Time-Bounded Turing Machines A Turing Machine that, given an input

More information

CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010

CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010 CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010 Computational complexity studies the amount of resources necessary to perform given computations.

More information

Announcements. Friday Four Square! Problem Set 8 due right now. Problem Set 9 out, due next Friday at 2:15PM. Did you lose a phone in my office?

Announcements. Friday Four Square! Problem Set 8 due right now. Problem Set 9 out, due next Friday at 2:15PM. Did you lose a phone in my office? N P NP Completeness Announcements Friday Four Square! Today at 4:15PM, outside Gates. Problem Set 8 due right now. Problem Set 9 out, due next Friday at 2:15PM. Explore P, NP, and their connection. Did

More information

Lecture 7: The Polynomial-Time Hierarchy. 1 Nondeterministic Space is Closed under Complement

Lecture 7: The Polynomial-Time Hierarchy. 1 Nondeterministic Space is Closed under Complement CS 710: Complexity Theory 9/29/2011 Lecture 7: The Polynomial-Time Hierarchy Instructor: Dieter van Melkebeek Scribe: Xi Wu In this lecture we first finish the discussion of space-bounded nondeterminism

More information

V Honors Theory of Computation

V Honors Theory of Computation V22.0453-001 Honors Theory of Computation Problem Set 3 Solutions Problem 1 Solution: The class of languages recognized by these machines is the exactly the class of regular languages, thus this TM variant

More information