IITM-CS6845: Theory Toolkit February 3, 2012
Lecture 4: Derandomizing the logspace algorithm for s-t connectivity
Lecturer: N. S. Narayanaswamy    Scribe: Mrinal Kumar

Lecture Plan: In this lecture, we will see a derandomization of the logspace algorithm for s-t connectivity based on random walks, which we saw in the last lecture. The derandomization comes at the cost of more space: our new algorithm uses $O(\log^2 n)$ space and polynomial time.

1 Recall

In the last lecture, we looked at a randomized logspace algorithm for the following problem: the input is an undirected graph $G$ and a pair of vertices $s, t \in V(G)$, and the question is to decide whether $s$ is connected to $t$. Our algorithm was based on random walks and therefore had access to random bits at each step. Moreover, the algorithm ran in polynomial time and had one-sided error. The aim of this lecture is to obtain a deterministic algorithm for this problem using a comparable amount of resources, in both space and time. The algorithm described here runs in polynomial time but uses polylogarithmic space.

In the last lecture we also proved, using Chebyshev's inequality, the following lemma.

Lemma 1. Let $A, B \subseteq \{0,1\}^m$ and let $H$ be the family of linear functions from $\{0,1\}^m$ to $\{0,1\}^m$ defined by $H = \{ax + b \mid a, b \in \{0,1\}^m\}$, for some positive integer $m$. Let $\epsilon > 0$. Then
$$\Pr_{h \in H}\Big[\,\big|\Pr_{x \in \{0,1\}^m}[x \in A \wedge h(x) \in B] - \rho(A)\rho(B)\big| \geq \epsilon\,\Big] \leq \frac{1}{2^m \epsilon^2},$$
where $\rho(A) = |A|/2^m$ and $\rho(B) = |B|/2^m$.

It should be observed that, unlike deterministic logspace algorithms, which always run in polynomial time, a randomized logspace algorithm has considerably more computational power if it is allowed more than polynomial time.
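Lemma 1 rests on the pairwise independence of the family $H$. As a concrete toy check, the sketch below instantiates $H$ for $m = 3$, interpreting $\{0,1\}^m$ as the finite field $GF(2^3)$ so that $ax + b$ is field arithmetic (an assumption about the intended arithmetic; the lemma itself is quoted from the previous lecture). For any fixed $x \neq y$, the pair $(h(x), h(y))$ ranges uniformly over all $2^m \times 2^m$ possibilities as $h$ ranges over $H$:

```python
from collections import Counter

def gf8_mul(a, b):
    # Multiply two elements of GF(2^3) = GF(2)[x] / (x^3 + x + 1),
    # represented as 3-bit integers.
    r = 0
    for _ in range(3):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:      # degree overflow: reduce mod x^3 + x + 1
            a ^= 0b1011
    return r

# h_{a,b}(x) = a*x + b; addition in GF(2^m) is XOR.
x, y = 1, 5
pairs = Counter((gf8_mul(a, x) ^ b, gf8_mul(a, y) ^ b)
                for a in range(8) for b in range(8))

# All 64 pairs (h(x), h(y)) occur, each for exactly one h in H.
assert len(pairs) == 64 and set(pairs.values()) == {1}
```

This uniformity is what lets Chebyshev's inequality bound the deviation of $\Pr[x \in A \wedge h(x) \in B]$ from $\rho(A)\rho(B)$ over a random choice of $h$.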
2 Strategy

Consider any bounded-space randomized algorithm $A$ which uses space $s$ and runs in time $t$. We will use the words algorithm and machine interchangeably for $A$. Let the algorithm use a maximum of $r$ random bits in any run. Now, let us fix an input $x$ and observe the run of the algorithm on this input as a function of the random bits it uses. Since the machine uses at most space $s$, the number of configurations that the machine can be in is bounded by $2^s$. It is also worth noting that $t \leq 2^s$, as the algorithm always terminates. Without loss of generality, we can assume that the algorithm runs in exactly $2^s$ steps.

Since our algorithm uses $r$ random bits, an obvious approach to derandomization would be to iterate over all bit strings of length $r$ and count the number of times the algorithm accepts. Unfortunately, this blows up the running time exponentially. So instead we construct a pseudorandom source, which itself takes $O(\log r)$ random bits as input and in turn generates $r_1 \geq r$ bits, which our construction uses. Finally, we iterate over the choices of these $O(\log r)$ seed bits to completely derandomize the procedure; the running-time blowup is therefore controlled. To ensure the correctness of the construction, we need to prove that both algorithms, namely $A$ and our construction, accept and reject the same set of strings.

For this purpose, it is convenient to visualize the working of a bounded-space algorithm in the following way. Let $C$ be the set of all possible configurations of the algorithm on the fixed input $x$. Construct the $2^s \times 2^s$ square matrix $M$ whose rows and columns are indexed by the elements of $C$. The entry $M_{i,j}$ is the probability of the algorithm going from configuration $i$ to configuration $j$ when it is given access to a random bit chosen uniformly at random.
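The naive enumeration dismissed above is still the shape of the final algorithm, only applied to the short seed instead of the full random string. A minimal sketch (`algo` is a hypothetical stand-in for a decision procedure that consumes its random bits as a string):

```python
def derandomize_by_enumeration(algo, r):
    """Run `algo` on every possible r-bit random string and count accepts.

    This takes 2^r simulations -- exponential in r, which is exactly the
    blow-up that motivates feeding the machine pseudorandom bits generated
    from an O(log r)-bit seed instead.
    """
    runs = [algo(format(seed, f"0{r}b")) for seed in range(2 ** r)]
    return sum(runs), len(runs)

# Toy "algorithm": accepts iff its first random bit is 1.
accepts, total = derandomize_by_enumeration(lambda bits: bits[0] == "1", 4)
assert (accepts, total) == (8, 16)
```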
Without loss of generality, it can be assumed that the starting, accepting and rejecting configurations of $A$ on input $x$ are unique; label them $in$, $ac$ and $rej$ respectively. The probability that $A$ accepts $x$ in one step is $M_{in,ac}$. Similarly, the probability of $A$ accepting $x$ in two steps is $M^2_{in,ac}$. Proceeding inductively, for any $t$, the probability that $A$ accepts $x$ in $t$ steps is $M^t_{in,ac}$. This matrix-multiplication model gives an alternative way of looking at the run of any bounded-space randomized algorithm, and we will use precisely this model for our derandomization: whether or not $x$ is accepted by the algorithm is determined by the entry $M^t_{in,ac}$.

If we provide some alternative distribution of random bits to $A$, the entries of the matrix described above change. Suppose that $A$, instead of being given a string chosen uniformly at random from $\{0,1\}^r$, is given a string from some other distribution, so that at step $t$ we obtain a matrix $D$ such that $D$ and $M^t$ are not too different in terms of their entries. We will then use $D$ to solve the problem deterministically. Recall that since $A$ is a randomized algorithm with one-sided error, it accepts $x$ with probability bounded away from half when $x$ is in the language, and rejects with probability 1 otherwise. So, in constructing $D$, if we ensure that the entries of $M^t$ and $D$ differ only in a way that each entry retains its position on the real line with respect to $1/2$, then the output computed from $D$ and the output of $A$ remain the same.

3 Construction

3.1 Notation

Let us fix the following notation for this section.

1. The norm of a vector $x \in \mathbb{R}^n$ is defined as $\|x\| = \max_i |x_i|$, and for a real $n \times n$ matrix $M$, the norm of $M$, denoted $\|M\|$, is defined as
$$\|M\| = \max_{x \in \mathbb{R}^n,\, x \neq 0} \frac{\|Mx\|}{\|x\|}.$$
We will use the following properties of this norm at various places during the course of the analysis.

For two vectors $x_1$ and $x_2$ in $\mathbb{R}^n$, $\|x_1 + x_2\| \leq \|x_1\| + \|x_2\|$. The proof follows from the definition and the triangle inequality for absolute values of real numbers.

$\|M + N\| \leq \|M\| + \|N\|$.

Proof.
$$\|M + N\| = \max_{x \in \mathbb{R}^n} \frac{\|(M+N)x\|}{\|x\|} \quad (1)$$
$$\leq \max_{x \in \mathbb{R}^n}\left(\frac{\|Mx\|}{\|x\|} + \frac{\|Nx\|}{\|x\|}\right) \quad (2)$$
$$\leq \|M\| + \|N\| \quad (3)$$
Here (1) and (3) follow from the definition of the norm, while (2) follows from the vector property proved above.

$\|MN\| \leq \|M\|\,\|N\|$.

Proof. Let $\nu = \|N\|$ and $\mu = \|M\|$. Let $x$ be an arbitrary vector in $\mathbb{R}^n$ and let $y = Nx$. Then $\|MNx\| = \|My\| \leq \mu\|y\|$, by the definition of the norm. Similarly, $\|y\| = \|Nx\| \leq \nu\|x\|$. Coupling the two together, we have $\|MNx\| \leq \mu\nu\|x\|$. Therefore, $\|MN\| \leq \mu\nu$.

For a stochastic matrix $M$, in which every row sums to 1, $\|M\| = 1$.

Proof. We first show that for any vector $x = (x_1, x_2, \ldots, x_n)$, $\|Mx\| \leq \|x\|$. Then, if we can exhibit a vector $x$ such that $\|Mx\|/\|x\| = 1$, we are done. It is easy to verify that the all-ones vector $(1, 1, \ldots, 1)$ is such a vector. For the first part, consider the $i$-th entry of the vector $Mx$: $|(Mx)_i| = |\sum_j M_{ij} x_j| \leq \sum_j M_{ij} |x_j| \leq \sum_j M_{ij} \|x\| = \|x\|$. Hence $\|Mx\|/\|x\| \leq 1$.

For an $n \times n$ matrix $M$ with each entry bounded in absolute value by $\epsilon$, $\|M\| \leq n\epsilon$.

Proof. For any vector $x$, $|(Mx)_i| = |\sum_j M_{ij} x_j| \leq \sum_j \epsilon |x_j| \leq \sum_j \epsilon \|x\| = n\epsilon\|x\|$. Hence the norm of $M$ is bounded above by $n\epsilon$.

2. For the transition probability matrix $M$ of a randomized algorithm and a distribution $D$, $M_D[i,j]$ is the probability that the machine goes from configuration $i$ to configuration $j$ when given access to a string chosen at random according to $D$.

3. $U_r$ denotes the uniform distribution on $\{0,1\}^r$.

4. $M$ denotes the transition probability matrix over the configurations of the randomized algorithm that we are trying to derandomize. We will try to estimate the entries of the matrix $M^t$, where the machine runs in time $t$ and the random bits are chosen uniformly at random.

3.2 Pseudorandom generator

We use the following generator to obtain the pseudorandom string used in our simulation.

Definition 2. For positive integers $m, k$, we define the function $G_k : \{0,1\}^m \times H^k \to \{0,1\}^{m2^k}$, where $H$ is the family of linear functions $\{0,1\}^m \to \{0,1\}^m$ defined by $H = \{ax + b \mid a, b \in \{0,1\}^m\}$. The function is defined recursively as follows.

Base case: $G_0(x) = x$.
Recursion: $G_k(x, h_1, h_2, \ldots, h_k) = G_{k-1}(x, h_1, \ldots, h_{k-1}) \circ G_{k-1}(h_k(x), h_1, \ldots, h_{k-1})$, where $\circ$ denotes concatenation of strings.

The first few strings computed by the function above are as follows.

1. $G_0(x) = x$
2. $G_1(x, h_1) = x \circ h_1(x)$
3. $G_2(x, h_1, h_2) = x \circ h_1(x) \circ h_2(x) \circ h_1(h_2(x))$

Intuitively, $G_k$ provides the pseudorandom string to be used in the $k$-th step of our recursive construction. The idea is to carefully choose the hash functions $h_1, h_2, \ldots, h_k$ such that our randomized algorithm cannot distinguish between a random string from $U_{m2^k}$ and the one provided by the generator $G_k$. We need two more definitions before we can proceed to the algorithm.

Definition 3. For $h_1, h_2, \ldots, h_k \in H$, $D_{h_1, h_2, \ldots, h_k}$ denotes the probability distribution of $G_k(x, h_1, h_2, \ldots, h_k)$ when $x$ is chosen uniformly at random from $\{0,1\}^m$. We denote $M_{D_{h_1, h_2, \ldots, h_k}}$ by $M_{h_1, h_2, \ldots, h_k}$.

Definition 4. Given a transition probability matrix $M_{h_1, h_2, \ldots, h_{k-1}}$ and a hash function $h \in H$, $h$ is said to be $\epsilon$-good for $h_1, h_2, \ldots, h_{k-1}$ if $\|M^2_{h_1, h_2, \ldots, h_{k-1}} - M_{h_1, h_2, \ldots, h_{k-1}, h}\| \leq \epsilon$.

The above definition essentially bounds the error in the transition probabilities which will occur if the machine is run for two steps while being provided with half the number of random bits required.

3.3 Algorithms

The main loop

1. Input: The transition probability matrix $M$ of a logspace randomized algorithm ($M$ is an $n \times n$ square matrix) which uses $r \leq n$ random bits. The algorithm runs in time $t \leq n$ for any input.
2. Output: A matrix $N$ such that $\|N - M^t\| \leq \epsilon$ for some given constant $\epsilon$.

Let $K = \log t$, $\delta = \epsilon/2^K$, $m = O(\log n)$, $t_1 = m2^K$.

For $k \in \{1, 2, \ldots, K\}$:
(a) Call the Find Hash Function routine to compute a hash function $h_k$ which is $\delta$-good for $h_1, h_2, \ldots, h_{k-1}$.

For all pairs of configurations $i, j$, compute $N[i,j]$:
(a) Call the Compute Values routine to compute the value of $N[i,j] = M_{h_1, h_2, \ldots, h_K}[i,j]$.
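The recursive definition of $G_k$ transcribes directly into code. In the sketch below (an illustration, not the space-efficient bit-at-a-time evaluation the algorithm actually uses), $m$-bit strings are Python strings, hash functions are arbitrary callables, and XOR with a constant stands in for the member $ax + b$ of $H$ with $a = 1$:

```python
def G(k, x, hs):
    # G_k(x, h_1..h_k) = G_{k-1}(x, h_1..h_{k-1}) o G_{k-1}(h_k(x), h_1..h_{k-1})
    if k == 0:
        return x
    return G(k - 1, x, hs[:k - 1]) + G(k - 1, hs[k - 1](x), hs[:k - 1])

def xor_with(c):
    # h(x) = x XOR c: the linear map ax + b with a = 1, b = c.
    return lambda x: "".join("1" if u != v else "0" for u, v in zip(x, c))

h1, h2 = xor_with("11"), xor_with("01")

# Matches the pattern x o h1(x) o h2(x) o h1(h2(x)); length m * 2^k = 2 * 4.
assert G(2, "01", [h1, h2]) == "01" + "10" + "00" + "11"
```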
Find Hash Function

1. Input: Hash functions $h_1, h_2, \ldots, h_{k-1}$ and a rational number $\delta > 0$.
2. Output: A hash function $h_k$ which is $\delta$-good for $h_1, h_2, \ldots, h_{k-1}$.

For every $h \in H$ do the following:
(a) For each pair of configurations $i, j$, compute $\lambda = |M_{h_1, h_2, \ldots, h_{k-1}, h}[i,j] - M^2_{h_1, h_2, \ldots, h_{k-1}}[i,j]|$ using calls to the routine Compute Values described below.
(b) If $\lambda > \delta/n$ for some pair $i, j$, then this choice of $h$ is a bad choice; go to the next $h \in H$. Otherwise return $h$.

Compute Values

1. Input: Hash functions $h_1, h_2, \ldots, h_k$ and a pair of configurations $i, j$.
2. Output: $M_{h_1, h_2, \ldots, h_k}[i,j]$.

For every string $x \in \{0,1\}^m$, simulate the algorithm starting at configuration $i$ with the random bits being read from the string $G_k(x, h_1, h_2, \ldots, h_k)$. Keep a count $\lambda$ of the strings $x$ for which the final configuration is $j$. Return $M_{h_1, h_2, \ldots, h_k}[i,j] = \lambda/2^m$.

It is worth noting that the length of the string $G_k(x, h_1, h_2, \ldots, h_k)$ is polynomial in $n$ and hence cannot be stored on the tape, as that would violate our desired space bound of $O(\log^2 n)$. Therefore, we do not compute the whole string and write it down; we compute a bit only when it is needed during the run of the algorithm. It is easy to observe that this computation can be done $m$ bits at a time using the recursive definition of $G_k$ from the previous section.

4 Resources used by the algorithm

The space utilized by the algorithm can be bounded by $O(m \log t)$, as that is the amount of space required to store the hash functions $h_1, h_2, \ldots, h_K$. All other space used by the algorithm can be bounded by $O(m)$. Hence the total space used by the algorithm is at most $O(\log^2 n)$. For the analysis of the running time, we observe the following.

1. The main loop runs $K = O(\log n)$ times.
2. Each call to the routine Find Hash Function takes time $O(2^{2m} \cdot n^2 \cdot T)$, where $T$ is the time taken by one call to the routine Compute Values.
3. Each call to the routine Compute Values takes time $O(2^m \cdot n \cdot m2^k)$.
4. Therefore, the total time taken by the algorithm can be bounded by a sufficiently large polynomial in $n$.

Thus our algorithm runs in polynomial time and uses $O(\log^2 n)$ space.

5 The proof of correctness

We begin by proving the following claim.

Claim 5. For any sequence of functions $h_1, h_2, \ldots, h_{k-1}$, there always exists a function $h \in H$ such that for every $i$ and $j$:
$$\left|M^2_{h_1, h_2, \ldots, h_{k-1}}[i,j] - M_{h_1, h_2, \ldots, h_{k-1}, h}[i,j]\right| \leq \frac{\delta}{n}.$$

Proof. Let $R$ denote the stochastic matrix $M_{h_1, h_2, \ldots, h_{k-1}}$, and suppose the machine uses an $m$-bit random string in one step. An entry $R[l,f]$ is essentially the fraction of random strings which take the probabilistic machine from configuration $l$ to configuration $f$. Similarly, an entry $R^2[l,f]$ is the probability that the machine goes from $l$ to $f$ in two steps: for each intermediate configuration, this is the probability that the machine gets a string taking it from $l$ to that intermediate configuration in the first step, times the probability that it gets a string taking it from there to $f$ in the second step, summed over all intermediate configurations. Call such strings good strings: if the set of good strings for the first step is denoted by $A_1$ and the set of good strings for the second step by $A_2$, then the corresponding term of $R^2[l,f]$ equals $\frac{|A_1| \cdot |A_2|}{2^{2m}} = \rho(A_1)\rho(A_2)$.

We now formalize this argument. For a pair of configurations $i$ and $j$, let $A_{ij} = \{x \in \{0,1\}^m \mid G_{k-1}(x, h_1, h_2, \ldots, h_{k-1})$ takes the machine from configuration $i$ to configuration $j\}$. Therefore, $M_{h_1, h_2, \ldots, h_{k-1}}[i,j] = \rho(A_{ij})$.
Similarly, from the definition of $G_k$,
$$M_{h_1, h_2, \ldots, h_{k-1}, h}[i,j] = \sum_l \Pr_{x \in \{0,1\}^m}\big[x \in A_{il} \wedge h(x) \in A_{lj}\big].$$
For any fixed triple of configurations $i, j, l$, using Lemma 1 we know that for an $h$ chosen uniformly at random from $H$ and for any $\epsilon$, the following holds:
$$\Pr_{h \in H}\Big[\,\big|\Pr_{x \in \{0,1\}^m}[x \in A_{il} \wedge h(x) \in A_{lj}] - \rho(A_{il})\rho(A_{lj})\big| \geq \epsilon\,\Big] \leq \frac{1}{2^m \epsilon^2}.$$

Therefore, substituting $\epsilon = \frac{\delta}{n^2}$, for a random $h$ the following is true with probability at least $1 - \frac{n^4}{2^m \delta^2}$:
$$\big|\Pr_{x \in \{0,1\}^m}[x \in A_{il} \wedge h(x) \in A_{lj}] - \rho(A_{il})\rho(A_{lj})\big| \leq \frac{\delta}{n^2}.$$
By the union bound over all triples $i, j, l$ (at most $n^3$ of them), a random $h$ satisfies the above expression for every triple with probability at least $\frac{1}{2}$, for a suitably large $m = O(\log n)$. Summing the errors over the intermediate configurations $l$, which are at most $n$ in number, each entry-wise error is at most $n \cdot \frac{\delta}{n^2} = \frac{\delta}{n}$, and by the probabilistic method the claim follows.

Using the claim proved above and the properties of the matrix norm defined earlier, we prove the following crucial lemma.

Lemma 6. The routine Find Hash Function always returns a function $h$ which is $\delta$-good for $h_1, h_2, \ldots, h_{k-1}$.

Proof. From the definition of $\delta$-goodness, we need to bound the norm of the matrix $M^2_{h_1, h_2, \ldots, h_{k-1}} - M_{h_1, h_2, \ldots, h_{k-1}, h}$ by $\delta$. By Claim 5, some $h \in H$ passes the routine's entry-wise test, so the routine returns a function $h$ for which every entry of this matrix is bounded by $\frac{\delta}{n}$. By the property of the norm proved earlier, the norm of the matrix is then at most $n \cdot \frac{\delta}{n} = \delta$.

We now prove the final lemma, which ensures that the matrix $M_{h_1, h_2, \ldots, h_k}$ closely approximates the transition probability matrix $M^t_{U_{m2^k}}$; this essentially completes our proof.

Lemma 7. If for every $1 \leq z \leq k$, $h_z$ is $\delta$-good for $h_1, h_2, \ldots, h_{z-1}$, then
$$\left\|M_{h_1, h_2, \ldots, h_k} - M^t_{U_{m2^k}}\right\| \leq (2^k - 1)\delta.$$

Proof. The proof is by induction on $k$. The base case is trivial, as we iterate over all $x \in \{0,1\}^m$. For the induction step, assume the statement holds for $k-1$. Using the triangle inequality, which our matrix norm satisfies,
$$\left\|M_{h_1, \ldots, h_k} - M^t_{U_{m2^k}}\right\| \leq \left\|M_{h_1, \ldots, h_k} - M^2_{h_1, \ldots, h_{k-1}}\right\| + \left\|M^2_{h_1, \ldots, h_{k-1}} - M^t_{U_{m2^k}}\right\|.$$
Since $h_k$ is $\delta$-good for $h_1, h_2, \ldots, h_{k-1}$, the first term in the sum above is bounded by $\delta$. To bound the second term, observe that $M^t_{U_{m2^k}} = \big(M^{t/2}_{U_{m2^{k-1}}}\big)^2$. So
$$\left\|M^2_{h_1, \ldots, h_{k-1}} - M^t_{U_{m2^k}}\right\| \leq \left\|M_{h_1, \ldots, h_{k-1}}\right\| \left\|M_{h_1, \ldots, h_{k-1}} - M^{t/2}_{U_{m2^{k-1}}}\right\| + \left\|M_{h_1, \ldots, h_{k-1}} - M^{t/2}_{U_{m2^{k-1}}}\right\| \left\|M^{t/2}_{U_{m2^{k-1}}}\right\|.$$
Since each of these matrices is stochastic, its norm is bounded by 1, and each difference term appearing in the expansion above is bounded by $(2^{k-1} - 1)\delta$ by the induction hypothesis. Substituting these bounds back, the second term is at most $2(2^{k-1} - 1)\delta$, so the total is at most $\delta + 2(2^{k-1} - 1)\delta = (2^k - 1)\delta$, and the lemma follows.
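The induction above amounts to the error recurrence $e_0 = 0$ and $e_k = \delta + 2e_{k-1}$ (one $\delta$-good step plus twice the lower-level error), whose closed form is $e_k = (2^k - 1)\delta$. A quick numeric check of that closed form (the values of $\delta$ and $K$ here are arbitrary):

```python
delta, K = 1e-3, 10

err = 0.0                      # e_0 = 0: level 0 iterates over all seeds exactly
for _ in range(K):
    err = delta + 2 * err      # e_k = delta + 2 * e_{k-1}

# Closed form: e_K = (2^K - 1) * delta
assert abs(err - (2 ** K - 1) * delta) < 1e-12
```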
Since the value of $\delta$ was chosen to be $\frac{\epsilon}{2^K}$, substituting into the above lemma with $k = K$, we have the following theorem.

Theorem 8. The algorithm computes a matrix $N$ such that $\left\|N - M^t_{U_{m2^K}}\right\| \leq \epsilon$.

6 Tools used

We used the following major tools, which we have seen in previous lectures of this course, at different places in this construction and the proof of correctness.

1. Chebyshev's inequality.
2. Linearity of expectation.
3. The union bound.

7 References

1. Noam Nisan: RL ⊆ SC. STOC 1992.
2. Noam Nisan: Pseudorandom Generators for Space-Bounded Computation. STOC 1990.
More informationBy allowing randomization in the verification process, we obtain a class known as MA.
Lecture 2 Tel Aviv University, Spring 2006 Quantum Computation Witness-preserving Amplification of QMA Lecturer: Oded Regev Scribe: N. Aharon In the previous class, we have defined the class QMA, which
More informationAditya Bhaskara CS 5968/6968, Lecture 1: Introduction and Review 12 January 2016
Lecture 1: Introduction and Review We begin with a short introduction to the course, and logistics. We then survey some basics about approximation algorithms and probability. We also introduce some of
More information1.10 Matrix Representation of Graphs
42 Basic Concepts of Graphs 1.10 Matrix Representation of Graphs Definitions: In this section, we introduce two kinds of matrix representations of a graph, that is, the adjacency matrix and incidence matrix
More informationCPSC 320 (Intermediate Algorithm Design and Analysis). Summer Instructor: Dr. Lior Malka Final Examination, July 24th, 2009
CPSC 320 (Intermediate Algorithm Design and Analysis). Summer 2009. Instructor: Dr. Lior Malka Final Examination, July 24th, 2009 Student ID: INSTRUCTIONS: There are 6 questions printed on pages 1 7. Exam
More informationRandomness and non-uniformity
Randomness and non-uniformity JASS 2006 Course 1: Proofs and Computers Felix Weninger TU München April 2006 Outline Randomized computation 1 Randomized computation 2 Computation with advice Non-uniform
More informationTheoretical Cryptography, Lectures 18-20
Theoretical Cryptography, Lectures 18-20 Instructor: Manuel Blum Scribes: Ryan Williams and Yinmeng Zhang March 29, 2006 1 Content of the Lectures These lectures will cover how someone can prove in zero-knowledge
More informationcompare to comparison and pointer based sorting, binary trees
Admin Hashing Dictionaries Model Operations. makeset, insert, delete, find keys are integers in M = {1,..., m} (so assume machine word size, or unit time, is log m) can store in array of size M using power:
More informationLecture 12: Lower Bounds for Element-Distinctness and Collision
Quantum Computation (CMU 18-859BB, Fall 015) Lecture 1: Lower Bounds for Element-Distinctness and Collision October 19, 015 Lecturer: John Wright Scribe: Titouan Rigoudy 1 Outline In this lecture, we will:
More informationCSC 5170: Theory of Computational Complexity Lecture 5 The Chinese University of Hong Kong 8 February 2010
CSC 5170: Theory of Computational Complexity Lecture 5 The Chinese University of Hong Kong 8 February 2010 So far our notion of realistic computation has been completely deterministic: The Turing Machine
More informationLecture : PSPACE IP
IITM-CS6845: Theory Toolkit February 16, 2012 Lecture 22-23 : PSPACE IP Lecturer: Jayalal Sarma.M.N. Scribe: Sivaramakrishnan.N.R. Theme: Between P and PSPACE 1 Interactive Protocol for #SAT In the previous
More informationTheory of Computation
Thomas Zeugmann Hokkaido University Laboratory for Algorithmics http://www-alg.ist.hokudai.ac.jp/ thomas/toc/ Lecture 3: Finite State Automata Motivation In the previous lecture we learned how to formalize
More informationOn Recycling the Randomness of the States in Space Bounded Computation
On Recycling the Randomness of the States in Space Bounded Computation Preliminary Version Ran Raz Omer Reingold Abstract Let M be a logarithmic space Turing machine (or a polynomial width branching program)
More informationOutline. Complexity Theory. Example. Sketch of a log-space TM for palindromes. Log-space computations. Example VU , SS 2018
Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 3. Logarithmic Space Reinhard Pichler Institute of Logic and Computation DBAI Group TU Wien 3. Logarithmic Space 3.1 Computational
More informationIntroduction Long transparent proofs The real PCP theorem. Real Number PCPs. Klaus Meer. Brandenburg University of Technology, Cottbus, Germany
Santaló s Summer School, Part 3, July, 2012 joint work with Martijn Baartse (work supported by DFG, GZ:ME 1424/7-1) Outline 1 Introduction 2 Long transparent proofs for NP R 3 The real PCP theorem First
More informationLecture 5: January 30
CS71 Randomness & Computation Spring 018 Instructor: Alistair Sinclair Lecture 5: January 30 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They
More informationLecture 12 : Recurrences DRAFT
CS/Math 240: Introduction to Discrete Mathematics 3/1/2011 Lecture 12 : Recurrences Instructor: Dieter van Melkebeek Scribe: Dalibor Zelený DRAFT Last few classes we talked about program correctness. We
More informationCS5371 Theory of Computation. Lecture 18: Complexity III (Two Classes: P and NP)
CS5371 Theory of Computation Lecture 18: Complexity III (Two Classes: P and NP) Objectives Define what is the class P Examples of languages in P Define what is the class NP Examples of languages in NP
More informationCS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003
CS6999 Probabilistic Methods in Integer Programming Randomized Rounding April 2003 Overview 2 Background Randomized Rounding Handling Feasibility Derandomization Advanced Techniques Integer Programming
More informationLecture 24: Approximate Counting
CS 710: Complexity Theory 12/1/2011 Lecture 24: Approximate Counting Instructor: Dieter van Melkebeek Scribe: David Guild and Gautam Prakriya Last time we introduced counting problems and defined the class
More informationLecture 2: Continued fractions, rational approximations
Lecture 2: Continued fractions, rational approximations Algorithmic Number Theory (Fall 204) Rutgers University Swastik Kopparty Scribe: Cole Franks Continued Fractions We begin by calculating the continued
More informationUniversality for Nondeterministic Logspace
Universality for Nondeterministic Logspace Vinay Chaudhary, Anand Kumar Sinha, and Somenath Biswas Department of Computer Science and Engineering IIT Kanpur September 2004 1 Introduction The notion of
More informationCMPUT 675: Approximation Algorithms Fall 2014
CMPUT 675: Approximation Algorithms Fall 204 Lecture 25 (Nov 3 & 5): Group Steiner Tree Lecturer: Zachary Friggstad Scribe: Zachary Friggstad 25. Group Steiner Tree In this problem, we are given a graph
More informationPseudorandomness for permutation and regular branching programs
Pseudorandomness for permutation and regular branching programs Anindya De Computer Science Division University of California, Berkeley Berkeley, CA 94720, USA anindya@cs.berkeley.edu Abstract In this
More informationDynamic Programming: Shortest Paths and DFA to Reg Exps
CS 374: Algorithms & Models of Computation, Spring 207 Dynamic Programming: Shortest Paths and DFA to Reg Exps Lecture 8 March 28, 207 Chandra Chekuri (UIUC) CS374 Spring 207 / 56 Part I Shortest Paths
More informationNotes for Lecture 18
U.C. Berkeley Handout N18 CS294: Pseudorandomness and Combinatorial Constructions November 1, 2005 Professor Luca Trevisan Scribe: Constantinos Daskalakis Notes for Lecture 18 1 Basic Definitions In the
More informationApropos of an errata in ÜB 10 exercise 3
Apropos of an errata in ÜB 10 exercise 3 Komplexität von Algorithmen SS13 The last exercise of the last exercise sheet was incorrectly formulated and could not be properly solved. Since no one spotted
More informationLanguages, regular languages, finite automata
Notes on Computer Theory Last updated: January, 2018 Languages, regular languages, finite automata Content largely taken from Richards [1] and Sipser [2] 1 Languages An alphabet is a finite set of characters,
More informationLecture 1 : Data Compression and Entropy
CPS290: Algorithmic Foundations of Data Science January 8, 207 Lecture : Data Compression and Entropy Lecturer: Kamesh Munagala Scribe: Kamesh Munagala In this lecture, we will study a simple model for
More informationCS286.2 Lecture 8: A variant of QPCP for multiplayer entangled games
CS286.2 Lecture 8: A variant of QPCP for multiplayer entangled games Scribe: Zeyu Guo In the first lecture, we saw three equivalent variants of the classical PCP theorems in terms of CSP, proof checking,
More informationNondeterminism LECTURE Nondeterminism as a proof system. University of California, Los Angeles CS 289A Communication Complexity
University of California, Los Angeles CS 289A Communication Complexity Instructor: Alexander Sherstov Scribe: Matt Brown Date: January 25, 2012 LECTURE 5 Nondeterminism In this lecture, we introduce nondeterministic
More informationLecture 4. 1 Circuit Complexity. Notes on Complexity Theory: Fall 2005 Last updated: September, Jonathan Katz
Notes on Complexity Theory: Fall 2005 Last updated: September, 2005 Jonathan Katz Lecture 4 1 Circuit Complexity Circuits are directed, acyclic graphs where nodes are called gates and edges are called
More informationBasic Probabilistic Checking 3
CS294: Probabilistically Checkable and Interactive Proofs February 21, 2017 Basic Probabilistic Checking 3 Instructor: Alessandro Chiesa & Igor Shinkar Scribe: Izaak Meckler Today we prove the following
More informationSOLUTION: SOLUTION: SOLUTION:
Convert R and S into nondeterministic finite automata N1 and N2. Given a string s, if we know the states N1 and N2 may reach when s[1...i] has been read, we are able to derive the states N1 and N2 may
More informationLecture 7: More Arithmetic and Fun With Primes
IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Advanced Course on Computational Complexity Lecture 7: More Arithmetic and Fun With Primes David Mix Barrington and Alexis Maciel July
More informationLecture 15: Expanders
CS 710: Complexity Theory 10/7/011 Lecture 15: Expanders Instructor: Dieter van Melkebeek Scribe: Li-Hsiang Kuo In the last lecture we introduced randomized computation in terms of machines that have access
More informationCS261: Problem Set #3
CS261: Problem Set #3 Due by 11:59 PM on Tuesday, February 23, 2016 Instructions: (1) Form a group of 1-3 students. You should turn in only one write-up for your entire group. (2) Submission instructions:
More informationHarvard CS 121 and CSCI E-207 Lecture 6: Regular Languages and Countability
Harvard CS 121 and CSCI E-207 Lecture 6: Regular Languages and Countability Salil Vadhan September 20, 2012 Reading: Sipser, 1.3 and The Diagonalization Method, pages 174 178 (from just before Definition
More information