Better Time-Space Lower Bounds for SAT and Related Problems
Ryan Williams
Carnegie Mellon University
June 12
Introduction

Few super-linear time lower bounds are known for natural problems in NP. (Existing ones generally use a restricted computational model.)

We will show time lower bounds for the SAT problem on random-access machines using n^{o(1)} space. These lower bounds carry over to MAX-SAT, Hamilton Path, Vertex Cover, etc. [Raz and Van Melkebeek]

Previous time lower bounds for SAT (assuming n^{o(1)} space):
- ω(n) [Kannan 84]
- Ω(n^{1+ε}) for some ε > 0 [Fortnow 97]
- Ω(n^{√2−ε}) for all ε > 0 [Lipton and Viglas 99]
- Ω(n^{φ−ε}) for all ε > 0, where φ = (1+√5)/2 ≈ 1.618 [Fortnow and Van Melkebeek 00]
Our Main Result

√2 and φ are nice constants... The constant of our work will be (larger, but) not-so-nice.

Theorem: For all k, SAT requires n^{Υ_k} time (infinitely often) on a random-access machine using n^{o(1)} workspace, where Υ_k is the positive solution in (1, 2) to

Υ_k^3 (Υ_k − 1) = (k · 3^{2^{k−4}} · 4^{2^{k−5}} ··· (k−1))^{2^{−k+3}}.

Define Υ := lim_{k→∞} Υ_k. Then: an n^{Υ−ε} lower bound for every ε > 0. (Note: the Υ stands for "Ugly".)

However, Υ > √3 ≈ 1.732, so we'll present the result as an n^{√3} bound.
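As a numeric sanity check (not part of the talk), the constants Υ_k can be computed by bisection from the equation above; the helper names below are illustrative:

```python
import math

# Upsilon_k from the theorem: the root in (1, 2) of
#   c^3 (c - 1) = (k * 3^(2^(k-4)) * 4^(2^(k-5)) * ... * (k-1))^(2^(-k+3)).
def rhs(k):
    # computed in log space to avoid float overflow for large k
    log_prod = math.log(k)
    for j in range(3, k):
        log_prod += (2.0 ** (k - j - 1)) * math.log(j)
    return math.exp(log_prod * 2.0 ** (3 - k))

def upsilon(k):
    lo, hi = 1.0, 2.0          # bisection; c^3(c-1) is increasing on (1, 2)
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid ** 3 * (mid - 1) < rhs(k):
            lo = mid
        else:
            hi = mid
    return lo

assert abs(upsilon(3) - 1.6581) < 1e-3   # level-3 bound
assert upsilon(8) > upsilon(4)           # the bounds improve with k
assert abs(upsilon(30) - 1.7327) < 1e-3  # approaching Upsilon, a bit above sqrt(3)
```

The values grow from Υ_3 ≈ 1.658 toward the limit Υ ≈ 1.7327.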
Points About The Method We Use

The theorem says that for any sufficiently restricted machine, there is an infinite set of SAT instances it cannot solve correctly. We will not construct such a set of instances for every machine! The proof is by contradiction: it would be absurd if such a machine could solve SAT almost everywhere.

Ours and the above-cited methods use artificial computational models (alternating machines) to prove lower bounds for explicit problems in a realistic model.
Outline

- Preliminaries and Proof Strategy
- A Speed-Up Theorem (small-space computations can be accelerated using alternation)
- A Slow-Down Lemma (if NTIME can be efficiently simulated, then Σ_k TIME can be efficiently simulated with some slow-down)
- Lipton and Viglas' n^{√2} Lower Bound (the starting point for our approach)
- Our Inductive Argument (how to derive a better bound from Lipton-Viglas)
- From n^{1.66} to n^{1.732} (a subtle argument that squeezes more from the induction)
Preliminaries: Two possibly obscure complexity classes

DTISP[t, s] is deterministic time t and space s, simultaneously. (Note DTISP[t, s] ≠ DTIME[t] ∩ SPACE[s] in general.) We will be looking at DTISP[n^k, n^{o(1)}] for k ≥ 1.

NQL := ∪_{c ≥ 0} NTIME[n (log n)^c] = NTIME[n · poly(log n)]. The NQL stands for "nondeterministic quasi-linear time".
Preliminaries: SAT Facts

Satisfiability (SAT) is not only NP-complete, but also:

Theorem: [Cook 85, Gurevich and Shelah 89, Tourlakis 00] SAT is NQL-complete, under reductions doable in O(n · poly(log n)) time and O(log n) space (simultaneously). Moreover, the i-th bit of the reduction can be computed in O(poly(log n)) time.

Let D be a class closed under quasi-linear time, logspace reductions.

Corollary: If NTIME[n] ⊄ D, then SAT ∉ D.

If one can show NTIME[n] is not contained in some D, then one can name an explicit problem (SAT) not in D (modulo polylog factors).
Preliminaries: Some Hierarchy Theorems

For reasonable t(n) ≥ n, NTIME[t] ⊄ coNTIME[o(t)]. Furthermore, for integers k ≥ 1, Σ_k TIME[t] ⊄ Π_k TIME[o(t)].

So there's a tight time hierarchy within levels of the polynomial hierarchy.
Proof Strategy

Show that if SAT has a sufficiently good algorithm, then one contradicts a hierarchy theorem.

Strategy of prior work:
1. Show that DTISP[n^c, n^{o(1)}] can be sped up when simulated on an alternating machine
2. Show that NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] allows those alternations to be removed without much slow-down
3. Contradict a hierarchy theorem for small c

Our proof will use the Σ_k time versus Π_k time hierarchy, for all k.
A Speed-Up Theorem (Trading Time for Alternations)

Let t(n) = n^c for rational c ≥ 1, let s(n) be n^{o(1)}, and let k ≥ 2 be an integer.

Theorem: [Kannan 83] [Fortnow and Van Melkebeek 00]
DTISP[t, s] ⊆ Σ_k TIME[t^{1/k+o(1)}] ∩ Π_k TIME[t^{1/k+o(1)}].

That is, for any machine M running in time t and using small workspace, there is an alternating machine M' that makes k alternations and takes roughly t^{1/k} time. Moreover, M' can start in either an existential or a universal state.
Proof of the speed-up theorem

Let x be the input and M the small-space machine to simulate. Goal: Write a clever sentence in first-order logic with k (alternating) quantifiers that is equivalent to M(x) accepting.

Let C_j denote the configuration of M(x) after the j-th step: a bit string encoding head positions, workspace contents, and finite control. By the space assumption on M, |C_j| ≤ n^{o(1)}.

M(x) accepts iff there is a sequence C_1, C_2, ..., C_t where:
- C_1 is the initial configuration and C_t is in an accept state
- for all i, C_i leads to C_{i+1} in one step of M on input x.
Proof of speed-up theorem: The case k = 2

M(x) accepts iff
(∃ C_0, C_{√t}, C_{2√t}, ..., C_t)(∀ i ∈ {0, 1, ..., √t − 1})
[C_{i√t} leads to C_{(i+1)√t} in √t steps, C_0 is initial, C_t is accepting]

Runtime on an alternating machine:
- takes O(√t · s) = t^{1/2+o(1)} time to write down the C_j's
- takes O(log t) time to write down i
- the bracketed predicate takes O(√t · s) deterministic time to check

Two alternations, square-root speedup.
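The checkpoint trick above can be illustrated on a toy deterministic "machine" (a sketch, not from the talk; the transition rule is made up): the existential player writes down √t evenly spaced configurations, and the universal player verifies any single block in only √t steps.

```python
def step(c):
    # toy deterministic transition on a small integer "configuration"
    return (3 * c + 1) % 64

def run(c, steps):
    # run the machine forward; this plays the role of M's step-by-step execution
    for _ in range(steps):
        c = step(c)
    return c

t, b, start = 16, 4, 5          # t steps total, b = sqrt(t) blocks of length b
# checkpoints C_0, C_b, C_2b, ..., C_t that the existential quantifier guesses
checkpoints = [run(start, i * b) for i in range(b + 1)]
# the universal quantifier checks each block independently, in only b steps each
assert all(run(checkpoints[i], b) == checkpoints[i + 1] for i in range(b))
# a faked checkpoint sequence is caught by at least one block check
fake = list(checkpoints)
fake[2] ^= 1
assert not all(run(fake[i], b) == fake[i + 1] for i in range(b))
```

Each block check needs only b = √t steps and small space, which is what the O(√t · s) bound on the slide counts.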
Proof of speed-up theorem: The k = 3 case, first attempt

For k = 2, we had
(∃ C_0, C_{√t}, C_{2√t}, ..., C_t)(∀ i ∈ {0, 1, ..., √t − 1})
[C_{i√t} leads to C_{(i+1)√t} in √t steps, C_0 initial, C_t accepting]

Observation: The bracketed predicate is an O(√t)-time, small-space computation, so we can speed it up by a square root as well.

The straightforward way of doing this leads to:
(∃ C_0, C_{t^{2/3}}, C_{2t^{2/3}}, ..., C_t)(∀ i ∈ {0, 1, ..., t^{1/3} − 1})
(∃ C_{i·t^{2/3}+t^{1/3}}, C_{i·t^{2/3}+2t^{1/3}}, ..., C_{(i+1)·t^{2/3}})(∀ j ∈ {0, 1, ..., t^{1/3} − 1})
[C_{i·t^{2/3}+j·t^{1/3}} leads to C_{i·t^{2/3}+(j+1)·t^{1/3}} in t^{1/3} steps, C_0 initial, C_t accepting]
[Diagram: for k = 2 there is one stage of checkpoints C_0, C_{√t}, C_{2√t}, ..., C_t. For k = 3 there are two stages: stage-1 checkpoints C_0, C_{t^{2/3}}, C_{2t^{2/3}}, ..., C_t, with each interval subdivided by stage-2 checkpoints C_{i·t^{2/3}+t^{1/3}}, C_{i·t^{2/3}+2t^{1/3}}, ... spaced t^{1/3} apart.]
Proof of speed-up theorem: Not quite enough for k = 3

The k = 3 sentence we gave uses four quantifier blocks (∃∀∃∀), for only t^{1/3} time; we want only three quantifier blocks.

Idea: Take advantage of the computation's determinism: there is only one possible configuration at any step.

The acceptance condition for M(x) can be complemented. M(x) accepts iff
(∀ C_0, C_{√t}, C_{2√t}, ..., C_t, where C_0 is initial and C_t is rejecting)(∃ i ∈ {0, ..., √t − 1})
[C_{i√t} does not lead to C_{(i+1)√t} in √t steps]

That is: for all configuration sequences C_1, ..., C_t where C_t is rejecting, there exists a configuration C_i that does not lead to C_{i+1}.
The k = 3 case

We can therefore rewrite the k = 3 sentence, from

(∃ C_0, C_{t^{2/3}}, ..., C_t accepting)(∀ i ∈ {0, 1, ..., t^{1/3} − 1})
(∃ C_{i·t^{2/3}+t^{1/3}}, C_{i·t^{2/3}+2t^{1/3}}, ..., C_{(i+1)·t^{2/3}})(∀ j)
[C_{i·t^{2/3}+j·t^{1/3}} leads to C_{i·t^{2/3}+(j+1)·t^{1/3}} in t^{1/3} steps]

to:

(∃ C_0, C_{t^{2/3}}, ..., C_t accepting)(∀ i ∈ {0, 1, ..., t^{1/3} − 1})
(∀ C_{i·t^{2/3}+t^{1/3}}, ..., C_{(i+1)·t^{2/3}−t^{1/3}}, C′_{(i+1)·t^{2/3}} ≠ C_{(i+1)·t^{2/3}})(∃ j)
[C_{i·t^{2/3}+j·t^{1/3}} does not lead to C_{i·t^{2/3}+(j+1)·t^{1/3}} in t^{1/3} steps]

Voilà! Three quantifier blocks (∃∀∃). This is in Σ_3 TIME[t^{1/3+o(1)}] (and similarly one can show it's in Π_3 TIME[t^{1/3+o(1)}]).
This can be generalized...

For arbitrary k ≥ 3, one simply guesses (existentially or universally) t^{1/k} configurations at each stage.

Inverting quantifiers means the number of alternations only increases by one for every stage: (∃∀)(∃∀)(∃∀)··· collapses to ∃∀∃∀···.

There are k − 1 stages of guessing t^{1/k} configurations each, then t^{1/k} time to deterministically verify configurations.
A Slow-Down Lemma (Trading Alternations for Time)

Main Idea: The assumption NTIME[n] ⊆ DTIME[n^c] allows one to remove alternations from a computation, with a small time increase.

Let t(n) ≥ n be a polynomial and c ≥ 1.

Lemma: If NTIME[n] ⊆ DTIME[n^c], then for all k ≥ 1, Σ_k TIME[t] ⊆ Σ_{k−1} TIME[t^c].

We prove the following generalization, which will be very useful in our final proof.

Theorem: If Σ_k TIME[n] ⊆ Π_k TIME[n^c], then Σ_{k+1} TIME[t] ⊆ Σ_k TIME[t^c].
A Slow-Down Lemma: Proof

Assume Σ_k TIME[n] ⊆ Π_k TIME[n^c]. Let M be a Σ_{k+1} machine running in time t.

Recall M(x) can be characterized by a first-order sentence:
(∃ x_1, |x_1| ≤ t(|x|))(∀ x_2, |x_2| ≤ t(|x|)) ··· (Q x_{k+1}, |x_{k+1}| ≤ t(|x|))[P(x, x_1, x_2, ..., x_{k+1})]
where P runs in time t(|x|).

Important Point: the input to P is of O(t(|x|)) length, so P actually runs in linear time with respect to the length of its input.
A Slow-Down Lemma: Proof

Assume Σ_k TIME[n] ⊆ Π_k TIME[n^c]. Define
R(x, x_1) := (∀ x_2, |x_2| ≤ t(|x|)) ··· (Q x_{k+1}, |x_{k+1}| ≤ t(|x|))[P(x, x_1, x_2, ..., x_{k+1})]

So M(x) accepts iff (∃ x_1, |x_1| ≤ t(|x|)) R(x, x_1).

By definition, R is recognized by a Π_k machine in time t(|x|), i.e., in linear time (since |x_1| = t(|x|)).

By assumption, there is an R′ equivalent to R that starts with an ∃, has k quantifier blocks, and runs in t(|x|)^c time.

So M(x) accepts iff (∃ x_1, |x_1| ≤ t(|x|)) R′(x, x_1), which puts M in Σ_k TIME[t^c].
Lipton and Viglas' n^{√2} Lower Bound (Rephrased)

Lemma: If NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] for some c ≥ 1, then for all polynomials t(n) ≥ n, NTIME[t] ⊆ DTISP[t^c, t^{o(1)}].

Theorem: NTIME[n] ⊄ DTISP[n^{√2−ε}, n^{o(1)}].

Proof: Assume NTIME[n] ⊆ DTISP[n^c, n^{o(1)}]. (We will find a c that implies a contradiction.)
- Σ_2 TIME[n] ⊆ NTIME[n^c], by the slow-down theorem
- NTIME[n^c] ⊆ DTISP[n^{c^2}, n^{o(1)}], by assumption and padding
- DTISP[n^{c^2}, n^{o(1)}] ⊆ Π_2 TIME[n^{c^2/2}], by the speed-up theorem

So c < √2 gives Σ_2 TIME[n] ⊆ Π_2 TIME[o(n)], contradicting the hierarchy theorem.
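The chain of inclusions can be checked numerically (a sketch, not from the talk): the final Π_2 exponent is c^2/2, so the contradiction kicks in exactly below √2.

```python
import math

def pi2_exponent(c):
    # follow the chain starting from Sigma_2 TIME[n^e] with e = 1:
    e = 1.0
    e *= c        # slow-down:            Sigma_2[n] in NTIME[n^c]
    e *= c        # assumption + padding: NTIME[n^c] in DTISP[n^{c^2}, n^{o(1)}]
    return e / 2  # speed-up (k = 2):     DTISP[n^{c^2}] in Pi_2 TIME[n^{c^2/2}]

# a contradiction with the Sigma_2 vs Pi_2 hierarchy needs final exponent < 1
threshold = max(c / 1000 for c in range(1000, 2000) if pi2_exponent(c / 1000) < 1)
assert abs(threshold - math.sqrt(2)) < 1e-2
```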
Viewing Lipton-Viglas as a Lemma (The Base Case for Our Induction)

We deliberately presented Lipton and Viglas' result differently from the original argument. In this way, we get

Lemma: NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] implies Σ_2 TIME[n] ⊆ Π_2 TIME[n^{c^2/2}].

Note that if c < 2 then c^2/2 < c. Thus we may not necessarily have a contradiction for larger c, but we can remove one alternation from Σ_3 with only an n^{c^2/2} cost: the slow-down theorem then implies Σ_3 TIME[n] ⊆ Σ_2 TIME[n^{c^2/2}].
The Start of the Induction: Σ_3

Assume NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] and the lemma.
- Σ_3 TIME[n] ⊆ Σ_2 TIME[n^{c^2/2}], by slow-down and the lemma
- Σ_2 TIME[n^{c^2/2}] ⊆ NTIME[n^{c^3/2}], by slow-down
- NTIME[n^{c^3/2}] ⊆ DTISP[n^{c^4/2}, n^{o(1)}], by assumption and padding
- DTISP[n^{c^4/2}, n^{o(1)}] ⊆ Π_3 TIME[n^{c^4/6}], by speed-up

Observe: Now c < 6^{1/4} ≈ 1.565 contradicts the time hierarchy for Σ_3 and Π_3. But if c^4 ≥ 6, then we obtain a new lemma: Σ_3 TIME[n] ⊆ Π_3 TIME[n^{c^4/6}].
Σ_4, Σ_5, ...

Assume NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] and the lemmas. (Here we drop the TIME from Σ_k TIME for tidiness.)

Σ_4[n] ⊆ Σ_3[n^{c^4/6}] ⊆ Σ_2[n^{c^6/12}] ⊆ NTIME[n^{c^7/12}], but
NTIME[n^{c^7/12}] ⊆ DTISP[n^{c^8/12}, n^{o(1)}] ⊆ Π_4[n^{c^8/48}]
(c < 48^{1/8} ≈ 1.623 implies a contradiction)

Σ_5[n] ⊆ Σ_4[n^{c^8/48}] ⊆ Σ_3[n^{c^12/288}] ⊆ Σ_2[n^{c^14/576}], and this is in
NTIME[n^{c^15/576}] ⊆ DTISP[n^{c^16/576}, n^{o(1)}] ⊆ Π_5[n^{c^16/2880}]
(c < 2880^{1/16} ≈ 1.645 implies a contradiction)
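The pattern above is the recurrence e_2 = c^2/2 and e_k = (c^2/k) · e_2 · e_3 ··· e_{k−1}: unwind Σ_k down to NTIME, pad into DTISP, then speed up with k alternations. A numeric sketch (helper names are illustrative) recovers the thresholds 6^{1/4}, 48^{1/8}, 2880^{1/16}:

```python
# e[k] such that Sigma_k TIME[n] ⊆ Pi_k TIME[n^{e[k]}] under the assumption
def exponents(c, kmax):
    es = {2: c * c / 2}
    prod = 1.0
    for k in range(3, kmax + 1):
        prod *= es[k - 1]
        es[k] = (c * c / k) * prod
    return es

def threshold(k):
    # largest c on a 1e-3 grid still giving the contradiction e_k < 1
    return max(c / 1000 for c in range(1000, 2000)
               if exponents(c / 1000, k)[k] < 1)

assert abs(threshold(3) - 6 ** 0.25) < 1e-2         # ~1.565
assert abs(threshold(4) - 48 ** 0.125) < 1e-2       # ~1.623
assert abs(threshold(5) - 2880 ** (1 / 16)) < 1e-2  # ~1.645
```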
An intermediate lower bound: n^{1.661}

Assume NTIME[n] ⊆ DTISP[n^c, n^{o(1)}].

Claim: The inductive process of the previous slide converges. The constant derived is lim_{k→∞} f(k), where f(k) := ∏_{j=1}^{k−1} (1 + 1/j)^{1/2^j} ≈ 1.6616.
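The claimed product can be checked directly (a numeric sketch, not from the talk): f(k) reproduces the level-k thresholds from the previous slide exactly, and the limit is about 1.661.

```python
from math import isclose

def f(k):
    # f(k) = prod_{j=1}^{k-1} (1 + 1/j)^(1/2^j)
    p = 1.0
    for j in range(1, k):
        p *= (1 + 1 / j) ** (1 / 2 ** j)
    return p

# f(k) matches the level-k contradiction thresholds exactly...
assert isclose(f(3), 6 ** (1 / 4))
assert isclose(f(4), 48 ** (1 / 8))
assert isclose(f(5), 2880 ** (1 / 16))
# ...and converges to about 1.6616
assert abs(f(60) - 1.6616) < 1e-3
```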
A Time-Space Tradeoff

Corollary: For every c < 1.66 there is a d > 0 such that SAT is not in DTISP[n^c, n^d].
From n^{1.66} to n^{1.732}

DTISP[t, t^{o(1)}] ⊆ Π_k TIME[t^{1/k+o(1)}] is an unconditional result. All the other derived class inclusions in the above proof depend on the assumption that NTIME[n] ⊆ DTISP[n^c, n^{o(1)}].

We'll now show how such an assumption can get DTISP[n^c, n^{o(1)}] ⊆ Π_k TIME[n^{c/(k+ε)+o(1)}] for some ε > 0. This will push the lower bound higher.

Lemma: Let c < 2. Define d_1 := 2 and d_k := 1 + d_{k−1}/c. If NTIME[n^{2/c}] ⊆ DTISP[n^2, n^{o(1)}], then for all k, DTISP[n^{d_k}, n^{o(1)}] ⊆ Π_2 TIME[n^{1+o(1)}].

For c < 2, {d_k} is increasing: with each k, a bit more of DTISP[n^{O(1)}, n^{o(1)}] is shown to be contained in Π_2 TIME[n^{1+o(1)}].
Proof of Lemma

Lemma: Let c < 2. Define d_1 := 2 and d_k := 1 + d_{k−1}/c. If NTIME[n^{2/c}] ⊆ DTISP[n^2, n^{o(1)}], then for all k ∈ N, DTISP[n^{d_k}, n^{o(1)}] ⊆ Π_2 TIME[n^{1+o(1)}].

Induction on k. The k = 1 case is trivial (speed-up theorem). Suppose NTIME[n^{2/c}] ⊆ DTISP[n^2, n^{o(1)}] and DTISP[n^{d_k}, n^{o(1)}] ⊆ Π_2 TIME[n^{1+o(1)}].

Want: DTISP[n^{1+d_k/c}, n^{o(1)}] ⊆ Π_2 TIME[n^{1+o(1)}].

By padding, the two assumptions imply NTIME[n^{d_k/c}] ⊆ DTISP[n^{d_k}, n^{o(1)}] ⊆ Π_2 TIME[n^{1+o(1)}]. (*)
Goal: DTISP[n^{1+d_k/c}, n^{o(1)}] ⊆ Π_2 TIME[n^{1+o(1)}]

Consider a Π_2 simulation of DTISP[n^{1+d_k/c}, n^{o(1)}] with only O(n) bits (n^{1−o(1)} configurations) in the universal quantifier:

(∀ configurations C_1, ..., C_{n^{1−o(1)}} of M on x s.t. C_{n^{1−o(1)}} is rejecting)
(∃ i ∈ {1, ..., n^{1−o(1)} − 1})[C_i does not lead to C_{i+1} in n^{d_k/c+o(1)} time]

The ∃-part is an NTIME computation on an input of length O(n), taking n^{d_k/c+o(1)} time.

(*) ⟹ the ∃-part can be replaced with a Π_2 TIME[n^{1+o(1)}] computation, i.e.

(∀ configurations C_1, ..., C_{n^{1−o(1)}} of M on x s.t. C_{n^{1−o(1)}} is rejecting)
(∀ y, |y| = c′ |x|^{1+o(1)})(∃ z, |z| = c′ |x|^{1+o(1)})[R(C_1, ..., C_{n^{1−o(1)}}, x, y, z)]

for some deterministic linear-time relation R and constant c′ > 0. The two adjacent universal quantifiers merge into one block, so this is a Π_2 simulation running in n^{1+o(1)} time.

Therefore, DTISP[n^{d_{k+1}}, n^{o(1)}] ⊆ Π_2 TIME[n^{1+o(1)}].
New Lemma Gives Better Bound

Corollary 1: Let c ∈ (1, 2). If NTIME[n^{2/c}] ⊆ DTISP[n^2, n^{o(1)}], then for all ε > 0 such that ε ≤ c/(c−1) − 1, DTISP[n^{c/(c−1)−ε}, n^{o(1)}] ⊆ Π_2 TIME[n^{1+o(1)}].

Proof: Recall d_1 = 2 and d_k = 1 + d_{k−1}/c. {d_k} is monotone non-decreasing for c < 2, and converges to the fixed point d = 1 + d/c, i.e., d = c/(c−1). (Note c = 2 implies d = 2.) It follows that for all such ε, there is a K with d_K ≥ c/(c−1) − ε.
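The recurrence d_k and its fixed point can be checked numerically (a sketch, not from the talk):

```python
def d_sequence(c, kmax):
    # d_1 = 2, d_k = 1 + d_{k-1}/c
    ds = [2.0]
    for _ in range(kmax - 1):
        ds.append(1 + ds[-1] / c)
    return ds

c = 1.7                      # any 1 < c < 2 behaves the same way
ds = d_sequence(c, 60)
assert all(a <= b for a, b in zip(ds, ds[1:]))   # monotone non-decreasing
assert abs(ds[-1] - c / (c - 1)) < 1e-9          # fixed point d = c/(c-1)
```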
Now: Apply the inductive method from the n^{1.66} lower bound

The base case now resembles Fortnow and Van Melkebeek's n^φ lower bound. If NTIME[n] ⊆ DTISP[n^c, n^{o(1)}], then Corollary 1 implies

Σ_2 TIME[n] ⊆ DTISP[n^{c^2}, n^{o(1)}] = DTISP[(n^{c^2 (c−1)/c})^{c/(c−1)+o(1)}, n^{o(1)}] ⊆ Π_2 TIME[n^{c(c−1)+o(1)}].

(Recall φ(φ − 1) = 1, so c < φ already gives a contradiction here.)

Inducting as before, we get

Σ_3[n] ⊆ Σ_2[n^{c(c−1)}] ⊆ DTISP[n^{c^3(c−1)}, n^{o(1)}] ⊆ Π_3[n^{c^3(c−1)/3}], then

Σ_4[n] ⊆ Σ_3[n^{c^3(c−1)/3}] ⊆ Σ_2[n^{c^4(c−1)^2/3}] ⊆ DTISP[n^{c^6(c−1)^2/3}, n^{o(1)}] ⊆ Π_4[n^{c^6(c−1)^2/12}], etc.
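Re-running the earlier recurrence with the improved base case e_2 = c(c−1) from Corollary 1 (a numeric sketch; helper names are illustrative) shows the thresholds now start at φ and climb to about 1.7327:

```python
# same recurrence as before, but with the improved base case e_2 = c(c-1)
def exponents2(c, kmax):
    es = {2: c * (c - 1)}
    prod = 1.0
    for k in range(3, kmax + 1):
        prod *= es[k - 1]
        es[k] = (c * c / k) * prod
    return es

def threshold2(k):
    # largest c on a 1e-4 grid still giving the contradiction e_k < 1
    return max(c / 10000 for c in range(10000, 20000)
               if exponents2(c / 10000, k)[k] < 1)

phi = (1 + 5 ** 0.5) / 2
assert abs(threshold2(2) - phi) < 1e-3      # base case: c(c-1) < 1 iff c < phi
assert threshold2(3) > threshold2(2)        # the induction improves on phi
assert abs(threshold2(40) - 1.7327) < 1e-3  # and climbs to about 1.7327
```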
100 Claim: The exponent e_k derived for Σ_kTIME[n] ⊆ Π_kTIME[n^{e_k}] is
e_k = c^{3·2^{k−3}} (c−1)^{2^{k−3}} / ( k · 3^{2^{k−4}} · 4^{2^{k−5}} · 5^{2^{k−6}} ··· (k−1) ). 36
101 Finishing up
Simplifying,
e_k = ( c³(c−1) / ( k^{2^{−k+3}} · (3^{1/2} · 4^{1/4} ··· (k−1)^{2^{−k+3}}) ) )^{2^{k−3}},
thus e_k < 1 iff c³(c−1) / ( k^{2^{−k+3}} · (3^{1/2} · 4^{1/4} ··· (k−1)^{2^{−k+3}}) ) < 1.
Define f(k) = k^{2^{−k+3}} · (3^{1/2} · 4^{1/4} ··· (k−1)^{2^{−k+3}}).
f(k) increases as k → ∞, with limit 3^{1/2} · 4^{1/4} · 5^{1/8} ··· ≈ 3.81.
The above task reduces to finding the positive root of c³(c−1) = 3^{1/2} · 4^{1/4} · 5^{1/8} ···, so any c above this root (≈ 1.7327) yields a contradiction. 37
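The limiting product and the root can be approximated in a few lines. A sketch (the function names and the bisection setup are my own, not from the slides):

```python
# Estimate f(infinity) = 3^{1/2} * 4^{1/4} * 5^{1/8} * ... (about 3.81) and bisect
# c^3 (c-1) = f(infinity) on (1, 2); the positive root comes out near 1.7327.

import math

def f_limit(terms: int = 60) -> float:
    """Partial product of j^{1/2^{j-2}} for j = 3, 4, ..., truncated after `terms` factors."""
    log_sum = sum(math.log(j) / 2 ** (j - 2) for j in range(3, 3 + terms))
    return math.exp(log_sum)

def positive_root(target: float) -> float:
    """Bisection for the root of c^3 (c - 1) = target with c in (1, 2)."""
    lo, hi = 1.0, 2.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid ** 3 * (mid - 1) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c_star = positive_root(f_limit())
assert abs(f_limit() - 3.81) < 0.01
assert abs(c_star - 1.7327) < 5e-4
```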
106 The above inductive method can be applied to improve several existing lower bound arguments: time lower bounds for SAT on off-line one-tape machines; time-space tradeoffs for nondeterminism vs. co-nondeterminism in the RAM model; etc. See the paper! 38
More information