Pseudorandomness                                                Prof. Salil Vadhan

                                 Problem Set 2
Assigned: Mon. November 23, 2015                                  Chi-Ning Chou

    Index   Problem                                              Progress
    1       Schwartz-Zippel lemma                                1/1
    2       Robustness of the model                              1/1
    3       Zero error versus 1-sided error                      1/1
    4       Polynomial Identity Testing for integer circuits     2/2
    5       Polynomial Identity Testing via Modular Reduction    2/2
    6       Primality Testing                                    2/2
    7       Chernoff Bound                                       3/3
    8       Necessity of Randomness for Identity Testing*        0/1
    9       Spectral Graph Theory                                6/6
    10      Hitting Time and Eigenvalues for Directed Graphs     2/3
    11      Consequences of Derandomizing prBPP                  0/4
    Total                                                        20/26

                        Table 1: Problem Set 2: Progress

Problem 1. (Schwartz-Zippel lemma.) If $f(x_1, \ldots, x_n)$ is a nonzero polynomial of degree $d$ over a field $\mathbb{F}$ and $S \subseteq \mathbb{F}$, then
$$\Pr_{\alpha_1, \ldots, \alpha_n \leftarrow S}[f(\alpha_1, \ldots, \alpha_n) = 0] \le \frac{d}{|S|}.$$

We prove the statement by induction on the number of variables $k$, with induction hypothesis
$$\Pr_{\alpha_1, \ldots, \alpha_k \leftarrow S}[f(\alpha_1, \ldots, \alpha_k) = 0] \le \frac{d}{|S|}.$$
For the base case $k = 1$, $f$ reduces to a univariate polynomial, which has at most $d$ roots. Thus the probability of picking a root from $S$ is at most $d/|S|$, since $S$ contains no more than $d$ roots of $f$.

Suppose the hypothesis holds for all $k \le n$ and consider $k = n + 1$. Group the terms of $f$ by the power of $x_{n+1}$ they contain: if $\ell$ is the largest power of $x_{n+1}$ appearing in $f$, we can write
$$f(x_1, \ldots, x_{n+1}) = \sum_{i=0}^{\ell} q_i(x_1, \ldots, x_n)\, x_{n+1}^i,$$
where the $q_i$ are polynomials in $x_1, \ldots, x_n$ and the leading coefficient $q_\ell$ is nonzero of degree at most $d - \ell$. By the induction hypothesis,
$$\Pr_{\alpha_1, \ldots, \alpha_n \leftarrow S}[q_\ell(\alpha_1, \ldots, \alpha_n) = 0] \le \frac{d - \ell}{|S|}.$$
If $q_\ell(\alpha_1, \ldots, \alpha_n) \neq 0$, then $f(\alpha_1, \ldots, \alpha_n, x_{n+1})$ is a nonzero univariate polynomial of degree $\ell$ in $x_{n+1}$, so for any such fixed $\alpha_1, \ldots, \alpha_n$,
$$\Pr_{\alpha_{n+1} \leftarrow S}[f(\alpha_1, \ldots, \alpha_{n+1}) = 0] \le \frac{\ell}{|S|}.$$
To sum up, writing $\vec{\alpha} = (\alpha_1, \ldots, \alpha_n)$,
$$\Pr_{\alpha_1, \ldots, \alpha_{n+1} \leftarrow S}[f(\alpha_1, \ldots, \alpha_{n+1}) = 0] \le \Pr[q_\ell(\vec{\alpha}) = 0] + \Pr[f(\vec{\alpha}, \alpha_{n+1}) = 0 \mid q_\ell(\vec{\alpha}) \neq 0] \le \frac{d - \ell}{|S|} + \frac{\ell}{|S|} = \frac{d}{|S|}.$$

Intuition (Schwartz-Zippel lemma)

- The statement holds for ANY subset $S$ of the field or domain.
- It exploits the structure of the polynomial: the fraction of zeros is upper bounded by the degree $d$. When $d = 1$ we can upper bound the exact number of zeros; when $d > 1$ the bound is on the fraction.
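As a quick illustration of how the lemma is used algorithmically, here is a minimal Python sketch of randomized identity testing by evaluation at a random point of $S^n$; the toy polynomials and the choice of the prime $p = 101$ are placeholders for the example.

```python
import random

def sz_identity_test(f, n_vars, degree, field_size, trials=20):
    """Randomized zero test for a polynomial f over Z_p given as a black box.

    If f is identically zero we always answer True; if f is nonzero, each
    trial errs with probability <= degree / field_size (Schwartz-Zippel),
    so `trials` independent trials err with prob <= (degree/field_size)**trials."""
    S = range(field_size)  # here S is all of Z_p for a prime p
    for _ in range(trials):
        point = [random.choice(S) for _ in range(n_vars)]
        if f(*point) % field_size != 0:
            return False   # found a witness of nonzeroness
    return True            # probably identically zero

# Toy examples: the first polynomial is identically zero, the second is not.
p = 101
f_zero = lambda x, y: (x + y) ** 2 - x ** 2 - 2 * x * y - y ** 2
f_nonzero = lambda x, y: x * y - 1
print(sz_identity_test(f_zero, 2, 2, p))     # True
print(sz_identity_test(f_nonzero, 2, 2, p))  # False with high probability
```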

Problem 2. (Robustness of the model) Suppose we modify our model of randomized computation to allow the algorithm to obtain a random element of $\{1, \ldots, m\}$ for any number $m$ whose binary representation it has already computed (as opposed to just allowing it access to random bits). Show that this would not change the classes BPP and RP.

Suppose $2^k \le m < 2^{k+1}$ for some integer $k$, so the binary representation of $m$ is $b_1 b_2 \cdots b_{k+1}$ with $b_i \in \{0, 1\}$. Let $X$ be a random sample from $\{1, \ldots, m\}$, set $X' = X - 1$, and consider the binary representation of $X'$: $b'_1 b'_2 \cdots b'_{k+1}$. If $b'_1$, the most significant bit, equals 1, throw this number away and draw another sample. Once $b'_1 = 0$, return $b'_2, \ldots, b'_{k+1}$ as random bits to the algorithm. Since we condition on the event that $X'$ is less than $2^k$, the distribution of $X'$ is uniform on $\{0, \ldots, 2^k - 1\}$, which means each $b'_i$ is uniform in $\{0, 1\}$ (and the bits are independent). As a result, $X'$ provides $k$ random bits.

Note that a single sample yields at most $k$ random bits, and the probability of producing a valid bit sequence per sample is $2^k / m \ge 1/2$. Hence the expected number of random bits generated per sample is at least $k/2$, so this randomization model is, if anything, more efficient than the binary one. In conclusion, when implementing an algorithm in BPP or RP, allowing the randomized algorithm to sample from $\{1, \ldots, m\}$ keeps the computational problem in BPP or RP.

Intuition (support size of random element)

- The number of samples a randomized algorithm needs scales inversely with the support size of the random element. That is, once $m = o(2^n)$, it does not affect the polynomial complexity classes.
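A small Python sketch of this rejection step, with `random.randint` standing in for the modified model's sampler over $\{1, \ldots, m\}$:

```python
import random

def bits_from_range_sampler(m, sample=lambda m: random.randint(1, m)):
    """Extract k unbiased bits from uniform samples over {1, ..., m},
    where 2**k <= m < 2**(k+1), by rejection on the most significant bit."""
    k = m.bit_length() - 1          # m has k+1 bits
    while True:
        x = sample(m) - 1           # uniform on {0, ..., m-1}
        if x < 2 ** k:              # happens with probability 2**k / m >= 1/2
            # x is now uniform on {0, ..., 2**k - 1}: its k low bits are unbiased
            return [(x >> i) & 1 for i in reversed(range(k))]

print(bits_from_range_sampler(12))  # three unbiased bits, since 2**3 <= 12 < 2**4
```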

Problem 3. (Zero error versus 1-sided error) Prove that ZPP = RP $\cap$ co-RP.

1. (ZPP $\subseteq$ RP $\cap$ co-RP) Suppose $L \in$ ZPP, so there is a zero-error probabilistic algorithm $A$ that decides $L$ in expected polynomial time. Run $A$ for twice its expected running time; if it has not halted by then, stop it and output 0. By Markov's inequality the truncated run produces $A$'s (always correct) answer with probability at least $1/2$, and the only possible error is rejecting some $x \in L$, so $L \in$ RP. Outputting 1 instead upon truncation gives, by the same argument, a co-RP algorithm, so $L \in$ co-RP. Thus $L \in$ RP $\cap$ co-RP.

2. (ZPP $\supseteq$ RP $\cap$ co-RP) Suppose $L \in$ RP $\cap$ co-RP, so there exist probabilistic polynomial-time algorithms $B$ and $C$ such that $B$ decides $L$ with false-negative error less than $1/3$ (and no false positives) and $C$ decides $L$ with false-positive error less than $1/3$ (and no false negatives). Construct a new probabilistic algorithm $D$ as follows: on input $x \in \{0,1\}^n$,

   (a) if $B(x) = 1$, output 1;
   (b) if $C(x) = 0$, output 0;
   (c) otherwise, repeat from (a) with fresh randomness.

To check the correctness of $D$: if $x \in L$, $B$ may fail to recognize it in a given iteration, but whenever $D$ halts it does so either because $B$ accepted (which only happens when $x \in L$) or because $C$ rejected (which only happens when $x \notin L$), so the output is always correct. Moreover, in each iteration $D$ halts with probability at least $2/3$ (if $x \in L$ then $B(x) = 1$ with probability at least $2/3$; if $x \notin L$ then $C(x) = 0$ with probability at least $2/3$), so the expected number of iterations is at most $3/2$. Since $B$ and $C$ are probabilistic polynomial-time algorithms, $D$ is a zero-error algorithm deciding $L$ in expected polynomial time. Thus $L \in$ ZPP.
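A minimal Python sketch of the looping construction of $D$, with hypothetical one-sided testers `B` and `C` standing in for the RP and co-RP algorithms:

```python
import random

def zpp_from_rp_and_corp(x, B, C):
    """Zero-error decision procedure built from an RP-type tester B (no false
    positives) and a co-RP-type tester C (no false negatives), as in step 2.
    The answer is always correct; the expected number of iterations is O(1)."""
    while True:
        if B(x):          # B accepts only when x is in L
            return True
        if not C(x):      # C rejects only when x is not in L
            return False
        # otherwise: inconclusive round, retry with fresh randomness

# Toy stand-ins: L = "x is even", with artificial one-sided noise.
B = lambda x: x % 2 == 0 and random.random() < 2 / 3   # may miss members, never over-accepts
C = lambda x: x % 2 == 0 or random.random() < 1 / 3    # may over-accept, never rejects members
print(zpp_from_rp_and_corp(10, B, C), zpp_from_rp_and_corp(7, B, C))  # True False
```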

Problem 4. (Polynomial Identity Testing for integer circuits)
(1) Show that if $N$ is a nonzero integer and $M \xleftarrow{R} \{1, \ldots, \log^2 N\}$, then $\Pr[N \not\equiv 0 \pmod{M}] = \Omega(1/\log\log N)$.
(2) Show that Arithmetic Circuit Identity Testing over $\mathbb{Z}$ is in co-RP.

(1) The idea behind this lower bound is simple: show by a counting argument that a large fraction of the primes in $\{1, \ldots, \log^2 N\}$ cannot divide $N$. Observe the following facts:

- The number of primes in $\{1, \ldots, \log^2 N\}$ is $\Theta\!\left(\frac{\log^2 N}{\log(\log^2 N)}\right) = \Theta\!\left(\frac{\log^2 N}{2\log\log N}\right)$.
- The number of prime divisors of $N$ is $O(\log N) = o\!\left(\frac{\log^2 N}{2\log\log N}\right)$.

Since the number of primes in $\{1, \ldots, \log^2 N\}$ is asymptotically larger than the number of prime divisors of $N$, a large portion of those primes, say half of them, do not divide $N$. As a result, for $N$ large enough there are at least $\Theta\!\left(\frac{\log^2 N}{4\log\log N}\right)$ numbers in $\{1, \ldots, \log^2 N\}$ that do not divide $N$. This gives the lower bound
$$\Pr[N \not\equiv 0 \pmod{M}] = \Omega(1/\log\log N), \qquad M \xleftarrow{R} \{1, \ldots, \log^2 N\}.$$

(2) The main difficulty is that the arithmetic circuit is over $\mathbb{Z}$: with polynomially many operations, the output value can become enormous and take exponential time even to write down. To be concrete, a single arithmetic operation takes $O(\log A)$ time, where $A$ is the largest number appearing in the whole evaluation, and if the circuit size is $m = \mathrm{poly}(n)$, where $n$ is the input size, the output might be as large as $n^{2^m}$. We therefore want to reduce the result of each operation modulo some number in order to bound the cost of a single operation. However, after introducing the modular reduction, some nonzero results might be detected as zero when the chosen modulus is not good enough, so we also need to show that this false alarm probability can be controlled. Consider the following procedure:

- There are $n$ inputs and the circuit size is $m = \mathrm{poly}(n)$, so the degree of the computed polynomial is less than $2^m$.
- Randomly choose each input from $\{1, \ldots, 3 \cdot 2^m\}$, which consumes $O(nm)$ random bits. By the Schwartz-Zippel lemma, the probability of a false alarm at this stage (a nonzero polynomial evaluating to zero) is at most $\frac{2^m}{3 \cdot 2^m} = \frac{1}{3}$.
- The output value can be as large as $N = O((3 \cdot 2^m)^{2^m})$. To keep all intermediate values small, we reduce after every operation modulo a divisor $M$ chosen uniformly at random from $\{1, \ldots, \log^2 N\}$; note that $M \le \log^2 N = O(2^{2m} m^2)$ has only $O(m)$ bits, so each modular operation is cheap.
- By part (1), a nonzero output survives the modular reduction with probability $\Omega(1/\log\log N) = \Omega(1/(m \log m))$, so the false alarm probability of a single run is at most $1 - \Omega(1/(m\log m))$.
- Repeating the whole process $m^2$ times independently, and declaring "nonzero" if any run yields a nonzero value, the false alarm probability is at most $\left(1 - \Omega(1/(m\log m))\right)^{m^2} = O(e^{-m/\log m})$, which is negligible in $n$.

To sum up, every step of the above process takes polynomial time, so this is indeed a probabilistic polynomial-time algorithm. By the Schwartz-Zippel lemma and the argument in the last two steps, the false alarm probability is bounded, and a missed detection cannot happen (an identically zero circuit always evaluates to zero modulo any $M$). We thus obtain a co-RP algorithm, and Arithmetic Circuit Identity Testing over $\mathbb{Z}$ is in co-RP.
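A rough Python sketch of this co-RP test; the circuit is modeled as a function that evaluates gate by gate modulo $M$, and the parameter choices mirror the analysis above:

```python
import random

def integer_circuit_is_zero(circuit_mod, n_inputs, m, repetitions=None):
    """One-sided identity test for an integer arithmetic circuit of size m.

    circuit_mod(inputs, M) must return the circuit's value on `inputs` with
    every intermediate result reduced modulo M.  If the circuit is identically
    zero we always return True; otherwise we return False except with small
    probability (false alarms only)."""
    S = 3 * 2 ** m                        # evaluation points come from {1, ..., 3 * 2^m}
    log_N = 2 ** m * (m + 2)              # roughly log N for N = (3 * 2^m)^(2^m)
    reps = repetitions if repetitions is not None else m * m
    for _ in range(reps):
        inputs = [random.randint(1, S) for _ in range(n_inputs)]
        M = random.randint(1, log_N ** 2)
        if circuit_mod(inputs, M) % M != 0:
            return False                  # witness that the circuit is nonzero
    return True                           # probably identically zero

# Toy circuit: (x + y)^2 - x^2 - 2xy - y^2, computed gate by gate modulo M.
def example_circuit(inputs, M):
    x, y = inputs
    a = (x + y) % M
    b = (a * a) % M
    c = (x * x) % M
    d = (2 * x % M) * y % M
    e = (y * y) % M
    return (b - c - d - e) % M

print(integer_circuit_is_zero(example_circuit, n_inputs=2, m=6))  # True
```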

Problem 5. (Polynomial Identity Testing via Modular Reduction)
(1) Let $f(x)$ be a univariate polynomial of degree $D$ over a field $\mathbb{F}$. Prove that there are constants $c, c'$ such that if $f(x)$ is nonzero and $g(x)$ is a randomly selected polynomial of degree at most $d = c \log D$, then the probability that $f(x) \bmod g(x)$ is nonzero is at least $1/(c' \log D)$. Deduce a randomized polynomial-time identity test for univariate polynomials presented in the above form.
(2) Obtain a randomized polynomial-time identity test for multivariate polynomials by giving a (deterministic) reduction to the univariate case.

(1) Assume the size of the field $\mathbb{F}$ is $p$. Then we know that:

- The number of irreducible factors of $f(x)$ of degree $d$ is at most $D/d$.
- The number of irreducible polynomials of degree at most $d$ is at least $\frac{p^{d+1}}{2d}$.
- The number of polynomials of degree at most $d$ is at most $p^{d+1}$.

Thus we can infer that:

- The probability that $g(x)$ has degree exactly $d$ is at least $1 - \frac{1}{p}$.
- The probability that $g(x)$ is irreducible, given that it has degree $d$, is at least $\frac{1}{2d}$.
- The probability that $g(x)$ is a factor of $f(x)$, given that it has degree $d$ and is irreducible, is at most $\frac{1}{d}$.

Finally, we have
$$\Pr[f(x) \not\equiv 0 \pmod{g(x)}] \ge \Pr[f(x) \not\equiv 0 \pmod{g(x)} \wedge \deg(g) = d \wedge g \text{ irreducible}]$$
$$= \Pr[\deg(g) = d] \cdot \Pr[g \text{ irreducible} \mid \deg(g) = d] \cdot \Pr[f(x) \not\equiv 0 \pmod{g(x)} \mid \deg(g) = d,\ g \text{ irreducible}]$$
$$\ge \left(1 - \frac{1}{p}\right) \cdot \frac{1}{2d} \cdot \left(1 - \frac{1}{d}\right) = \Omega\!\left(\frac{1}{d}\right) = \Omega\!\left(\frac{1}{\log D}\right).$$
That is, $\Pr[f(x) \not\equiv 0 \pmod{g(x)}] \ge \frac{1}{c' \log D}$ for some constant $c'$.

To achieve a randomized identity test with false alarm error less than $\epsilon$, we can run the above process $k = O(\log D \cdot \log\frac{1}{\epsilon})$ times. Each round uses $O(\log p \cdot \log D)$ random bits to sample $g$, so the total randomness required to achieve error $\epsilon$ is $O(\log p \cdot \log^2 D \cdot \log\frac{1}{\epsilon})$.
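A rough Python sketch of the resulting univariate test over $\mathbb{F}_p$. The degree budget for the random modulus is an ad hoc stand-in for $c \log_p D$, and in the intended application $f$ would be given implicitly (e.g., as a formula) rather than as an explicit coefficient list:

```python
import math
import random

def poly_mod(f, g, p):
    """Remainder of f modulo g over F_p; polynomials are coefficient lists,
    lowest degree first, and g must have a leading coefficient nonzero mod p."""
    r = [c % p for c in f]
    dg = len(g) - 1
    inv_lead = pow(g[-1], p - 2, p)      # inverse of g's leading coefficient
    while len(r) - 1 >= dg and any(r):
        while r and r[-1] % p == 0:      # drop leading zeros
            r.pop()
        if len(r) - 1 < dg:
            break
        shift = len(r) - 1 - dg
        q = (r[-1] * inv_lead) % p
        for i, c in enumerate(g):        # subtract q * x^shift * g
            r[shift + i] = (r[shift + i] - q * c) % p
    return r

def univariate_pit(f, p, trials=50):
    """Declare f == 0 over F_p unless some reduction modulo a random polynomial
    of degree ~ log_p D gives a nonzero remainder."""
    D = max(len(f) - 1, 2)
    d = max(int(math.log(D, p)), 1) * 2 + 1                     # illustrative degree budget
    for _ in range(trials):
        g = [random.randrange(p) for _ in range(d)] + [random.randrange(1, p)]
        if any(c % p for c in poly_mod(f, g, p)):
            return False        # nonzero remainder witnesses f != 0
    return True

p = 5
print(univariate_pit([0] * 30 + [1], p))   # x^30 is nonzero: False with high probability
print(univariate_pit([0, 5, 10], p))       # 5x + 10x^2 is zero over F_5: True
```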

(2) Before stating our reduction, let us first make some observations about multivariate polynomials. A multivariate polynomial $f(x_1, \ldots, x_k)$ with $k$ variables can be written as a polynomial in any one of its variables $x_i$: if the largest degree of $x_i$ in $f$ is $d_i$, then
$$f(x_1, \ldots, x_k) = \sum_{l=0}^{d_i} C_{il}(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_k)\, x_i^l.$$
Note that each $C_{il}$ is itself a polynomial, and if some coefficient $C_{il}$ is nonzero then $f$ is nonzero.

With these two observations, our goal is to give a reduction from a multivariate polynomial $f$ to a univariate polynomial $g$ such that $f$ is nonzero iff $g$ is nonzero. The difficulty is that the domain of $f$ is much larger than the domain of $g$, so it could happen that $f$ is nonzero but $g$ is zero. We therefore use the second observation to map the nonzero terms of $f$ to powers of $g$ injectively, so that nonzeroness is preserved. Define
$$g_0(x_1, \ldots, x_k) = f(x_1, \ldots, x_k)$$
$$g_1(y, x_2, \ldots, x_k) = g_0(y^{e_1}, x_2, \ldots, x_k)$$
$$g_2(y, x_3, \ldots, x_k) = g_1(y, y^{e_2}, x_3, \ldots, x_k)$$
$$\vdots$$
$$g_k(y) = g_{k-1}(y, y^{e_k}).$$
That is, we substitute $y^{e_i}$ for each variable $x_i$, for some exponents $e_i$. Clearly, for any choice of $e_1, \ldots, e_k$, zeroness passes down from $g_0$ to $g_k$. Our goal is therefore to find a particular assignment $e_1, \ldots, e_k$ such that nonzeroness also passes down from $g_0$ to $g_k$. This can be done by induction. Assume $g_i(y, x_{i+1}, \ldots, x_k)$ is nonzero for some $0 \le i < k$, and write it as a polynomial in $x_{i+1}$:
$$g_i(y, x_{i+1}, \ldots, x_k) = \sum_{l=0}^{d_{i+1}} C_l(y, x_{i+2}, \ldots, x_k)\, x_{i+1}^l,$$
so that
$$g_{i+1}(y, x_{i+2}, \ldots, x_k) = g_i(y, y^{e_{i+1}}, x_{i+2}, \ldots, x_k) = \sum_{l=0}^{d_{i+1}} C_l(y, x_{i+2}, \ldots, x_k)\, y^{e_{i+1} l}.$$
Since $g_i$ is nonzero, we can take the largest $l$ such that $C_l(y, x_{i+2}, \ldots, x_k)$ is nonzero. If $l = 0$, then $g_{i+1}$ is also nonzero. If $l \neq 0$, then we have a nonzero term attached to $y^{e_{i+1} l}$ with $0 < l \le d_{i+1}$, and if we deliberately make the ranges of exponents contributed by each $x_i$ disjoint, no cancellation can occur, so nonzeroness is propagated from $g_i$ to $g_{i+1}$. Consequently, if $g_0 = f$ is nonzero then $g_k$ is also nonzero, and we have a reduction from multivariate to univariate polynomial identity testing.

Explicit construction for the reduction. Take $e_1 = 1$ and $e_i = \prod_{j=1}^{i-1}(d_j + 1)$ for $i = 1, \ldots, k$, where $d_i$ is the largest degree of $x_i$ in $f$. One can verify that for every $i = 1, \ldots, k$, $e_i d_i < e_{i+1} = e_i(d_i + 1)$, so the exponent ranges are indeed disjoint and the above reduction works.
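A short Python sketch of this substitution on polynomials given as dictionaries of monomials; the exponents $e_i$ are exactly the ones from the explicit construction above:

```python
def kronecker_substitution(poly, degree_bounds):
    """Map a multivariate polynomial (dict: exponent tuple -> coefficient) to a
    univariate one (dict: exponent -> coefficient) via x_i -> y^{e_i} with
    e_i = prod_{j<i} (d_j + 1).  Distinct monomials land on distinct powers of y,
    so the result is nonzero iff the input is."""
    k = len(degree_bounds)
    e = [1] * k
    for i in range(1, k):
        e[i] = e[i - 1] * (degree_bounds[i - 1] + 1)
    out = {}
    for exps, coef in poly.items():
        power = sum(a * ei for a, ei in zip(exps, e))   # mixed-radix encoding of the monomial
        out[power] = out.get(power, 0) + coef
    return {p: c for p, c in out.items() if c != 0}

# f(x1, x2) = 3*x1^2*x2 - x2^3, with degree bounds (2, 3)
f = {(2, 1): 3, (0, 3): -1}
print(kronecker_substitution(f, (2, 3)))   # {5: 3, 9: -1} -- still two nonzero terms
```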

Problem 6. (Primality Testing)
(1) Show that for every positive integer $n$, the polynomial identity $(x + 1)^n \equiv x^n + 1 \pmod{n}$ holds iff $n$ is prime.
(2) Obtain a co-RP algorithm for the language Primality Testing $= \{n : n \text{ is prime}\}$ using part 1 together with Problem 2.5.

(1) ($\Leftarrow$) If $n$ is prime, then for every $0 < i < n$ the binomial coefficient $\binom{n}{i} = \frac{n!}{i!(n-i)!}$ is divisible by $n$, since the prime $n$ divides the numerator but not the denominator. Hence $(x + 1)^n = \sum_{i=0}^{n}\binom{n}{i}x^i \equiv x^n + 1 \pmod{n}$.

($\Rightarrow$) Let $P_n(x) = (x + 1)^n - (x^n + 1)$, and note that $P_n(x) = \sum_{i=1}^{n-1}\binom{n}{i}x^i$. We show the contrapositive: if $n$ is composite, then $P_n(x) \not\equiv 0 \pmod{n}$. Let $p$ be a prime factor of $n$ and let $p^a$ be the largest power of $p$ dividing $n$, and look at the $p$-th coefficient of $P_n(x)$. Since $\binom{n}{p} = \frac{n(n-1)\cdots(n-p+1)}{p!}$ is not divisible by $p^a$ (the factor $n$ contributes $p^a$, none of $n-1, \ldots, n-p+1$ is divisible by $p$, and $p!$ removes exactly one factor of $p$, so the $p$-adic valuation of $\binom{n}{p}$ is $a - 1 < a$), clearly $\binom{n}{p} \not\equiv 0 \pmod{n}$. Thus $P_n(x)$ is not identically zero over $\mathbb{Z}_n$, i.e., the polynomial identity fails.

(2) The following is a co-RP algorithm for Primality Testing.

    Algorithm 1: Primality Testing
    Input: n in Z^+
    Output: "Yes" if n is prime, "No" otherwise.
      if n is divisible by a small prime then output No
      let P_n(x) = (x + 1)^n - x^n - 1
      randomly pick a polynomial g(x) over Z_n with degree less than log n
      if g(x) divides P_n(x) over Z_n then output Yes
      else output No

(Note that $P_n(x) \bmod g(x)$ can be computed in time $\mathrm{poly}(\log n)$ by computing $(x+1)^n$ and $x^n$ modulo $(g(x), n)$ with repeated squaring.) We show the correctness of the algorithm in two steps.

(Completeness) If $n$ is a prime, then $P_n(x)$ is identically zero over $\mathbb{Z}_n$ by part (1). Thus every polynomial $g(x)$ divides $P_n(x)$ and the algorithm outputs Yes.

(Soundness) If $n$ is composite, we claim that the algorithm outputs No with non-negligible probability; we can then repeat the algorithm to decrease the false alarm error and obtain a co-RP algorithm. To justify the claim, we use the following two propositions. Suppose $n$ is composite with a prime factor $p$:

(a) $P_n(x)$ is nonzero over $\mathbb{Z}_n$; moreover, $P_n(x)$ is nonzero over $\mathbb{Z}_p$.
(b) For any polynomial $g(x)$: if $g(x)$ divides $P_n(x)$ over $\mathbb{Z}_n$, then $g(x)$ also divides $P_n(x)$ over $\mathbb{Z}_p$.

For (a), $P_n(x)$ is nonzero over $\mathbb{Z}_n$ by part (1), and if $P_n(x)$ were zero over $\mathbb{Z}_p$ then it would also be zero over $\mathbb{Z}_n$. For (b), the same reasoning applies: reducing a divisibility relation over $\mathbb{Z}_n$ modulo $p$ preserves it. With these two propositions, we can think of the test as polynomial identity testing via modular reduction over $\mathbb{Z}_p$, since by (b) the probability of a correct answer over $\mathbb{Z}_n$ is no less than that over $\mathbb{Z}_p$. Finally, from Problem 2.5 we know that the probability that $g(x)$ does not divide $P_n(x)$ is at least $\frac{1}{c \log n}$. Thus, after repeating the algorithm $k = O(\log n \cdot \log\frac{1}{\epsilon})$ times, we obtain a co-RP algorithm with false alarm probability less than $\epsilon$.
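An illustrative Python sketch of one way to run this test: the congruence is checked by square-and-multiply modulo $(n, g(x))$, and the monic restriction on $g$, the degree choice, and the trial count are ad hoc simplifications:

```python
import random

def poly_mul_mod(a, b, g, n):
    """Multiply polynomials a, b (coefficient lists, lowest degree first) and
    reduce modulo the monic polynomial g and the integer n."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % n
    d = len(g) - 1
    for i in range(len(prod) - 1, d - 1, -1):   # eliminate terms of degree >= deg(g)
        c = prod[i]
        if c:
            for j in range(d + 1):
                prod[i - d + j] = (prod[i - d + j] - c * g[j]) % n
    return prod[:d]

def poly_pow_mod(base, exp, g, n):
    result = [1]
    while exp:
        if exp & 1:
            result = poly_mul_mod(result, base, g, n)
        base = poly_mul_mod(base, base, g, n)
        exp >>= 1
    return result

def probably_prime(n, rounds=20):
    """One round: pick a random monic g of degree ~ log n and check whether
    (x+1)^n == x^n + 1 modulo (n, g(x)).  Primes always pass; composites are
    caught with noticeable probability per round."""
    if n < 4:
        return n in (2, 3)
    d = max(n.bit_length(), 2)
    for _ in range(rounds):
        g = [random.randrange(n) for _ in range(d)] + [1]   # random monic modulus
        lhs = poly_pow_mod([1, 1], n, g, n)                 # (x + 1)^n mod (n, g)
        rhs = poly_pow_mod([0, 1], n, g, n)                 # x^n mod (n, g)
        rhs[0] = (rhs[0] + 1) % n                           # ... + 1
        lhs = lhs + [0] * (len(rhs) - len(lhs))
        rhs = rhs + [0] * (len(lhs) - len(rhs))
        if lhs != rhs:
            return False
    return True

print([m for m in range(2, 40) if probably_prime(m)])  # almost surely the primes below 40
```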

Problem 7. (Chernoff Bound) Let $X_1, \ldots, X_t$ be independent $[0,1]$-valued random variables and $X = \sum_{i=1}^{t} X_i$.
(1) Show that for every $r \in [0, 1/2]$, $\mathbb{E}[e^{rX}] \le e^{r\mathbb{E}[X] + r^2 t}$.
(2) Deduce the Chernoff Bound of Theorem 2.21: $\Pr[X \ge \mathbb{E}[X] + \epsilon t] \le e^{-\epsilon^2 t / 4}$ and $\Pr[X \le \mathbb{E}[X] - \epsilon t] \le e^{-\epsilon^2 t / 4}$.
(3) Where did you use the independence of the $X_i$'s?

(1) Consider a single $X_i$. Using $e^{y} \le 1 + y + y^2$ for $y \in [0, 1/2]$ and $X_i^2 \le X_i \le 1$,
$$\mathbb{E}[e^{rX_i}] = \int e^{r x_i}\, dF_{X_i}(x_i) \le \int \left(1 + r x_i + r^2 x_i^2\right) dF_{X_i}(x_i) \le 1 + r\mathbb{E}[X_i] + r^2 \le e^{r\mathbb{E}[X_i] + r^2}.$$
Combining all of these (by independence),
$$\mathbb{E}[e^{rX}] = \prod_{i=1}^{t} \mathbb{E}[e^{rX_i}] \le \prod_{i=1}^{t} e^{r\mathbb{E}[X_i] + r^2} = e^{r\mathbb{E}[X] + r^2 t}.$$

(2) First, observe the moment generating function of $X$. For $s \ge 0$ and $f$ the density of $X$ (which takes values in $[0, t]$),
$$\mathbb{E}[e^{sX}] = \int_0^t e^{sx} f(x)\, dx \ge \int_a^t e^{sx} f(x)\, dx \ge e^{sa} \Pr[X \ge a],$$
so $\Pr[X \ge a] \le \mathbb{E}[e^{sX}]\, e^{-sa}$. Taking $a = \mathbb{E}[X] + \epsilon t$,
$$\Pr[X \ge \mathbb{E}[X] + \epsilon t] \le \mathbb{E}[e^{sX}]\, e^{-s(\mathbb{E}[X] + \epsilon t)}.$$
Taking $s = \epsilon/2$ and applying part (1), we get
$$\Pr[X \ge \mathbb{E}[X] + \epsilon t] \le e^{\frac{\epsilon}{2}\mathbb{E}[X] + \frac{\epsilon^2}{4}t}\, e^{-\frac{\epsilon}{2}(\mathbb{E}[X] + \epsilon t)} = e^{\frac{\epsilon}{2}\left(\frac{\epsilon t}{2} - \epsilon t\right)} = e^{-\epsilon^2 t / 4}. \tag{1}$$
In the other direction, for $s \ge 0$,
$$\mathbb{E}[e^{-sX}] = \int_0^t e^{-sx} f(x)\, dx \ge \int_0^a e^{-sx} f(x)\, dx \ge e^{-sa} \Pr[X \le a],$$
so $\Pr[X \le a] \le \mathbb{E}[e^{-sX}]\, e^{sa}$. Taking $a = \mathbb{E}[X] - \epsilon t$ and $s = \epsilon/2$, and applying the bound of part (1) with $r = -\epsilon/2$ (the same proof works for $r \in [-1/2, 0]$), we get
$$\Pr[X \le \mathbb{E}[X] - \epsilon t] \le \mathbb{E}\!\left[e^{-\frac{\epsilon}{2}X}\right] e^{\frac{\epsilon}{2}(\mathbb{E}[X] - \epsilon t)} \le e^{-\frac{\epsilon}{2}\mathbb{E}[X] + \frac{\epsilon^2}{4}t}\, e^{\frac{\epsilon}{2}(\mathbb{E}[X] - \epsilon t)} = e^{-\epsilon^2 t / 4}. \tag{2}$$

(3) In part (1), I used the independence of the $X_i$'s to factor $\mathbb{E}[e^{rX}]$ into $\prod_{i=1}^{t}\mathbb{E}[e^{rX_i}]$; each factor is bounded separately, and multiplying the bounds gives the final upper bound on $\mathbb{E}[e^{rX}]$.
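A tiny Python simulation comparing the upper-tail frequency with the bound $e^{-\epsilon^2 t/4}$, for $X$ a sum of independent uniform $[0,1]$ variables (so $\mathbb{E}[X] = t/2$):

```python
import math
import random

def chernoff_demo(t=200, eps=0.2, trials=20000):
    """Empirically compare Pr[X >= E[X] + eps*t] with exp(-eps^2 t / 4)."""
    threshold = t / 2 + eps * t
    exceed = sum(
        sum(random.random() for _ in range(t)) >= threshold
        for _ in range(trials)
    )
    return exceed / trials, math.exp(-eps * eps * t / 4)

empirical, bound = chernoff_demo()
print(f"empirical tail ~ {empirical:.5f}   Chernoff bound = {bound:.5f}")
```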

Problem 8. (Necessity of Randomness for Identity Testing*)

Problem 9. (Spectral Graph Theory) Let $M$ be the random-walk matrix of a $d$-regular undirected graph $G$ on $n$ vertices, with top eigenvalue $\lambda_1 = 1$. Prove the following:
(1) All eigenvalues of $M$ have absolute value at most 1.
(2) $G$ is disconnected $\iff$ 1 is an eigenvalue of $M$ of multiplicity at least 2.
(3) Suppose $G$ is connected. Then $G$ is bipartite $\iff$ $-1$ is an eigenvalue of $M$.
(4) $G$ is connected $\implies$ all eigenvalues of $M$ other than $\lambda_1$ are at most $1 - 1/\mathrm{poly}(n, d)$.
(5) $G$ connected and nonbipartite $\implies$ all eigenvalues of $M$ other than $\lambda_1$ have absolute value at most $1 - 1/\mathrm{poly}(n, d)$, and thus $\gamma(G) \ge 1/\mathrm{poly}(n, d)$.

(1) Let $\lambda$ be an eigenvalue of $M$ with eigenvector $x$, and assume the $i$-th entry of $x$ has the largest absolute value. Considering the $i$-th row of $Mx$ and denoting the $i$-th row of $M$ by $M_i$, we have
$$|\lambda|\,|x_i| = |(Mx)_i| = |M_i x| = \Big|\sum_{j=1}^{n} M_{ij} x_j\Big| \le \sum_{j=1}^{n} M_{ij} |x_j| \le |x_i| \sum_{j=1}^{n} M_{ij} = |x_i|.$$
As a result, the absolute value of any eigenvalue of $M$ cannot exceed 1.
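A small numerical illustration with numpy on the 6-cycle (2-regular, connected, and bipartite), checking parts (1)-(3) for its random-walk matrix:

```python
import numpy as np

def random_walk_matrix(adj):
    """Random-walk matrix M of a d-regular undirected graph: M[i][j] = A[i][j] / d."""
    adj = np.array(adj, dtype=float)
    d = adj[0].sum()
    return adj / d

n = 6
adj = np.zeros((n, n))
for i in range(n):                     # build the 6-cycle
    adj[i][(i + 1) % n] = adj[(i + 1) % n][i] = 1
M = random_walk_matrix(adj)
eigs = np.sort(np.linalg.eigvalsh(M))

print(np.round(eigs, 3))               # all values lie in [-1, 1]            (part 1)
print(np.isclose(eigs[-2], 1.0))       # False: 1 has multiplicity 1, G connected (part 2)
print(np.isclose(eigs[0], -1.0))       # True: -1 is an eigenvalue, G bipartite   (part 3)
```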

(2) We show the two directions.

($\Rightarrow$) Suppose $G$ is disconnected. After permuting the vertices, $M$ can be written as
$$M = \begin{pmatrix} M_1 & 0 \\ 0 & M_2 \end{pmatrix},$$
where $M_1$ and $M_2$ correspond to two disjoint vertex sets. If $M_1$ is $n_1 \times n_1$ and $M_2$ is $n_2 \times n_2$, then we have two linearly independent eigenvectors with eigenvalue 1:
$$v_1 = (\underbrace{1, \ldots, 1}_{n_1}, \underbrace{0, \ldots, 0}_{n_2})^T, \qquad v_2 = (\underbrace{0, \ldots, 0}_{n_1}, \underbrace{1, \ldots, 1}_{n_2})^T.$$

($\Leftarrow$) Observe that we can represent an eigenvalue in another way: for a unit eigenvector $x$,
$$\lambda = \frac{x^T M x}{x^T x} = \sum_{i,j} M_{ij} x_i x_j = \sum_{i,j} M_{ij} \cdot \tfrac{1}{2}\!\left[x_i^2 + x_j^2 - (x_i - x_j)^2\right] = 1 - \tfrac{1}{2}\sum_{i,j} M_{ij}(x_i - x_j)^2 \overset{(a)}{=} 1 - \tfrac{1}{d}\sum_{(i,j) \in E}(x_i - x_j)^2.$$
The factor 2 absorbed in (a) appears because the graph is undirected, so each edge shows up twice in the double summation. Now suppose 1 is an eigenvalue of multiplicity at least 2. Then there is an eigenvector $x$ with eigenvalue 1 that is orthogonal to $(1, 1, \ldots, 1)^T$, which means $\sum_i x_i = 0$. By the representation above, to have eigenvalue 1, $x$ must satisfy
$$\sum_{(i,j) \in E} (x_i - x_j)^2 = 0.$$
Since $\sum_i x_i = 0$ and $x$ is not the zero vector, $x$ has both positive and negative entries. However, the constraint above tells us that whenever there is an edge between two vertices, their corresponding entries of $x$ must be equal. From these two facts we infer that there are at least two disjoint vertex sets in $G$ with no edges between them, found by grouping the vertices according to the sign of the corresponding entry of $x$; hence $G$ is disconnected.

(3) We show the two directions.

($\Rightarrow$) Suppose $G$ is bipartite. After permuting the vertices, $M$ can be written as
$$M = \begin{pmatrix} 0 & M_1 \\ M_2 & 0 \end{pmatrix},$$
where $M_1$ and $M_2$ are of size $\frac{n}{2} \times \frac{n}{2}$ (note that $n$ must be even here, since a $d$-regular bipartite graph has sides of equal size). Immediately we have an eigenvector with eigenvalue $-1$, namely
$$v = (1, \ldots, 1, -1, \ldots, -1)^T,$$
where the numbers of $1$'s and $-1$'s are both $\frac{n}{2}$.

($\Leftarrow$) Observe that an eigenvalue can also be written in the following form, again for a unit eigenvector $x$:
$$\lambda = \frac{x^T M x}{x^T x} = \sum_{i,j} M_{ij} x_i x_j = \sum_{i,j} M_{ij} \cdot \tfrac{1}{2}\!\left[(x_i + x_j)^2 - x_i^2 - x_j^2\right] = -1 + \tfrac{1}{2}\sum_{i,j} M_{ij}(x_i + x_j)^2 = -1 + \tfrac{1}{d}\sum_{(i,j) \in E}(x_i + x_j)^2.$$
Suppose $-1$ is an eigenvalue. By this representation, the corresponding eigenvector $x$ must satisfy
$$\sum_{(i,j) \in E} (x_i + x_j)^2 = 0.$$
Namely, whenever there is an edge between two vertices, the corresponding entries of $x$ are either both 0 or of opposite signs. Some entry of $x$ is nonzero since $x$ is not the zero vector, so starting from a nonzero entry we observe that all of its neighbors have the opposite sign; moreover, since $G$ is assumed connected, every vertex ends up with a nonzero value. Finally, grouping the vertices by sign, vertices of the same sign cannot be adjacent, which shows that $G$ is bipartite.

(4) The eigenvalues other than $\lambda_1$ have corresponding unit eigenvectors $x_2, \ldots, x_n$ satisfying $\sum_j x_{ij} = 0$ and $\sum_j x_{ij}^2 = 1$, where $x_{ij}$ denotes the $j$-th entry of $x_i$. Consider an arbitrary such $x_i$. There must be an entry of $x_i$ with absolute value at least $1/\sqrt{n}$, say $x_{ik_1}$, and another entry of the opposite sign, say $x_{ik_2}$; hence their difference is at least $1/\sqrt{n}$: $|x_{ik_1} - x_{ik_2}| \ge 1/\sqrt{n}$. Since $G$ is connected, there is a path from $k_1$ to $k_2$ of length at most $n$, so by the pigeonhole principle some edge on this path has endpoints whose entries differ by at least $\frac{1}{\sqrt{n}\, n}$. Since (by the representation in part (2), applied to the unit eigenvector $x$ achieving $\lambda_2$)
$$\lambda_2 = 1 - \frac{1}{d} \sum_{(u,v) \in E(G)} (x_u - x_v)^2,$$
we have
$$\lambda_2 \le 1 - \frac{1}{d}\left(\frac{1}{\sqrt{n}\, n}\right)^2 = 1 - 1/\mathrm{poly}(n, d).$$

(5) First, recall a basic fact from graph theory: if the graph with random-walk matrix $M$ is connected, then the graph with random-walk matrix $M^2$ is connected iff $G$ is nonbipartite. From this fact and part (4) applied to the graph of $M^2$ (which is $d^2$-regular and connected), the second largest absolute eigenvalue of $M^2$ is at most $1 - 1/\mathrm{poly}(n, d)$. Since the eigenvalues of $M^2$ are the squares of the eigenvalues of $M$, this gives $\lambda_n \ge -(1 - 1/\mathrm{poly}(n, d))$. Combined with part (4), every eigenvalue of $M$ other than $\lambda_1$ has absolute value at most $1 - 1/\mathrm{poly}(n, d)$, and thus $\gamma(G) \ge 1/\mathrm{poly}(n, d)$.

(6) In the proof of part (4) we used the maximal path length in the graph to lower bound the difference across some edge. If the diameter of the graph is $D$, the same argument sharpens the bound to $1 - \Omega\!\left(\frac{1}{d\, n\, D^2}\right)$. To improve the bound further, we can apply Cauchy-Schwarz. Consider the path mentioned in (4) and denote the entries of the eigenvector along its vertices by $x_1, x_2, \ldots, x_{D+1}$. Let $y_i = x_{i+1} - x_i$ for $i = 1, \ldots, D$ denote the differences of consecutive vertices. We have the constraint $\left|\sum_{i=1}^{D} y_i\right| \ge \frac{1}{\sqrt{n}}$. Applying Cauchy-Schwarz,
$$\left(\sum_{i=1}^{D} y_i^2\right)\left(\sum_{i=1}^{D} 1^2\right) \ge \left(\sum_{i=1}^{D} y_i\right)^2 \ge \frac{1}{n},$$
so $\sum_{i=1}^{D} y_i^2 \ge \frac{1}{nD}$. Going back to the expression for the second largest eigenvalue, we get $\max\{\lambda_2, |\lambda_n|\} \le 1 - \Omega\!\left(\frac{1}{d\, n\, D}\right)$, namely $\gamma(G) \ge \Omega\!\left(\frac{1}{d\, n\, D}\right)$.
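A quick numerical illustration of the part (4) bound on the $n$-cycle, where $d = 2$ and $\lambda_2 = \cos(2\pi/n)$ sits well below the generic $1 - 1/(d\,n^3)$ guarantee:

```python
import numpy as np

def second_eigenvalue_of_cycle(n):
    """lambda_2 of the random-walk matrix of the undirected n-cycle (2-regular)."""
    M = np.zeros((n, n))
    for i in range(n):
        M[i][(i + 1) % n] = M[i][(i - 1) % n] = 0.5
    return np.sort(np.linalg.eigvalsh(M))[-2]

for n in (10, 50, 200):
    lam2 = second_eigenvalue_of_cycle(n)
    print(n, round(lam2, 6), round(1 - 1 / (2 * n ** 3), 6))   # lambda_2 <= 1 - 1/(d*n^3)
```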

Problem 10. (Hitting Time and Eigenvalues for Directed Graphs)
(1) Show that for every $n$ there is a digraph $G$ with $n$ vertices, outdegree 2, and $\mathrm{hit}(G) = 2^{\Omega(n)}$.
(2) Let $G$ be a regular digraph with random-walk matrix $M$. Show that $\lambda(G)^2 = \lambda(G')$, where $G'$ is the undirected graph whose random-walk matrix is $MM^T$.

(1) For every $n$, construct the digraph $G_n$ as follows:

- $V(G_n) = \{1, 2, \ldots, n\}$.
- $E(G_n) = \left(\bigcup_{i=1}^{n-1} \{(i, i+1)\}\right) \cup \left(\bigcup_{i=1}^{n} \{(i, 1)\}\right)$.

That is, every vertex $i$ has one edge to the next vertex $i+1$ (except vertex $n$, which can be given a second parallel edge back to vertex 1 so that every outdegree is 2) and another edge back to the first vertex 1. From each vertex of outdegree 2, the probabilities of moving forward and of returning to the first vertex are both $\frac{1}{2}$.

We want an exponential lower bound on the hitting time of $G_n$. Instead of doing this directly, we upper bound the probability of going from one vertex to another within $2^{\Omega(n)}$ steps. Concretely, we upper bound the probability of going from vertex 1 to vertex $n$ within $2^{n-3}$ steps; assume $n > 3$. Since the only way to arrive at vertex $n$ at step $k$ is to be at vertex 1 at step $k - n + 1$ and then take $n - 1$ consecutive forward steps,
$$\begin{aligned}
\Pr[\text{reach } n \text{ from } 1 \text{ within } 2^{n-3} \text{ steps}]
&= \sum_{k=n-1}^{2^{n-3}} \Pr[\text{reach } n \text{ from } 1 \text{ in exactly } k \text{ steps}] \\
&= \sum_{k=n-1}^{2^{n-3}} \Pr[\text{at vertex } 1 \text{ after } k-n+1 \text{ steps}] \cdot \Pr[\text{walk from } 1 \text{ to } n \text{ in } n-1 \text{ forward steps}] \\
&\le \sum_{k=n-1}^{2^{n-3}} \Pr[\text{walk from } 1 \text{ to } n \text{ in } n-1 \text{ forward steps}]
= \sum_{k=n-1}^{2^{n-3}} 2^{-(n-1)} \le 2^{n-3} \cdot 2^{-(n-1)} = \frac{1}{4}.
\end{aligned}$$
As a result, $\mathrm{hit}(G_n) \ge 2^{n-3}$, and thus $\mathrm{hit}(G_n) = 2^{\Omega(n)}$.
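As a sanity check, the expected hitting time from vertex 1 to vertex $n$ in this digraph can be computed exactly from the one-step recurrence $h_i = 1 + \frac{1}{2}h_{i+1} + \frac{1}{2}h_1$ with $h_n = 0$; the short Python computation below shows it equals $2^n - 2$:

```python
from fractions import Fraction

def expected_hitting_time(n):
    """Exact expected number of steps to reach vertex n from vertex 1 in the
    digraph of part (1): from vertex i, go to i+1 or back to 1, each w.p. 1/2.
    Writing h_i = a_i + b_i * h_1 and unwinding from h_n = 0 gives a closed
    linear equation for h_1."""
    a, b = Fraction(0), Fraction(0)          # a_n = b_n = 0
    for _ in range(n - 1):                   # peel off vertices n-1, ..., 1
        a, b = 1 + a / 2, Fraction(1, 2) + b / 2
    return a / (1 - b)                       # h_1 = a_1 + b_1 * h_1

for n in (4, 6, 8, 10, 20):
    print(n, expected_hitting_time(n))       # 14, 62, 254, 1022, 1048574  (i.e. 2^n - 2)
```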

(2) From Definition 2.50, we know that
$$\lambda(G) = \max_{\sum_i x_i = 0} \frac{\|xM\|}{\|x\|}, \qquad \lambda(G') = \max_{\sum_i y_i = 0} \frac{\|yMM^T\|}{\|y\|},$$
or equivalently, after squaring,
$$\lambda(G)^2 = \max_{\sum_i x_i = 0} \frac{xMM^Tx^T}{xx^T}, \qquad \lambda(G')^2 = \max_{\sum_i y_i = 0} \frac{yMM^TMM^Ty^T}{yy^T}.$$
Note that $MM^T$ is positive semidefinite, so we can decompose it as $MM^T = U\Sigma U^T$, where $U$ is a unitary matrix and $\Sigma$ is a diagonal matrix with nonnegative entries. Restricting to $\|x\| = \|y\| = 1$, let
$$\hat{x} = \operatorname*{arg\,max}_{\|x\|=1,\ \sum_i x_i = 0} xMM^Tx^T = \operatorname*{arg\,max}_{\|x\|=1,\ \sum_i x_i = 0} xU\Sigma U^Tx^T, \qquad
\hat{y} = \operatorname*{arg\,max}_{\|y\|=1,\ \sum_i y_i = 0} yMM^TMM^Ty^T = \operatorname*{arg\,max}_{\|y\|=1,\ \sum_i y_i = 0} yU\Sigma^2 U^Ty^T.$$
Since $MM^T$ and $MM^TMM^T$ are both positive semidefinite and share the eigenbasis $U$ (which contains the all-ones vector), the maximizers $\hat{x}$ and $\hat{y}$ can be taken to be eigenvectors of them, respectively. Consider
$$\hat{x}\,MM^TMM^T\,\hat{x}^T = \left(\hat{x}MM^T\hat{x}^T\right)^2 = \lambda(G)^4 \le \lambda(G')^2, \qquad \hat{y}MM^T\hat{y}^T = \lambda(G') \le \lambda(G)^2,$$
where the first chain uses that $\hat{x}$ is an eigenvector of $MM^T$ with eigenvalue $\hat{x}MM^T\hat{x}^T = \lambda(G)^2$, and the second uses that $\hat{y}$ is an eigenvector of $MM^T$ whose eigenvalue squares to $\hat{y}MM^TMM^T\hat{y}^T = \lambda(G')^2$, hence equals $\lambda(G')$. Thus we have $\lambda(G)^2 = \lambda(G')$.
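A quick numerical check of part (2) with numpy, on a 2-regular digraph built from two random permutations; $\lambda(G)$ and $\lambda(G')$ are computed directly from their definitions by projecting away the all-ones direction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# A 2-regular digraph as the union of two random permutations: out- and in-degree 2.
M = np.zeros((n, n))
for _ in range(2):
    perm = rng.permutation(n)
    M[np.arange(n), perm] += 0.5          # random-walk matrix of the digraph G

P = np.eye(n) - np.ones((n, n)) / n       # projector onto the space orthogonal to all-ones

lam_G = np.linalg.norm(P @ M @ P, 2)           # lambda(G):  max ||xM|| / ||x|| over x orthogonal to 1
lam_Gp = np.linalg.norm(P @ (M @ M.T) @ P, 2)  # lambda(G'): same quantity for the undirected graph MM^T
print(round(lam_G ** 2, 8), round(lam_Gp, 8))  # the two numbers agree
```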

Problem 11. (Consequences of Derandomizing prBPP)
(1) Suppose there is a PPT algorithm $A$ for approximating a function $f : \{0,1\}^* \to \mathbb{N}$ with multiplicative error $\varepsilon : \mathbb{N} \to \mathbb{R}_{\ge 0}$, in the sense that $\Pr[A(x) \le f(x) \le (1 + \varepsilon(|x|)) A(x)] \ge 2/3$ for all $x$. Show that if prBPP = prP, then there is a deterministic polynomial-time algorithm $B$ such that $B(x) \le f(x) \le (1 + \varepsilon(|x|)) B(x)$ for all $x$.

(1) We show this in three steps:
(a) Use the PPT algorithm $A$ to place a suitable promise problem $\Pi^{\mathrm{prBPP}}$ in prBPP.
(b) Since we assume prBPP = prP, we obtain a deterministic polynomial-time decision procedure for the corresponding problem $\Pi^{\mathrm{prP}}$.
(c) Use binary search on top of that decision procedure to construct the desired deterministic polynomial-time algorithm $B$.

The problem $\Pi^{\mathrm{prBPP}}$ is quite simple: the inputs are pairs $(x, N)$, and the yes and no instances are
$$\Pi^{\mathrm{prBPP}}_Y = \{(x, N) : f(x) \ge N\}, \qquad \Pi^{\mathrm{prBPP}}_N = \{(x, N) : f(x) < N\}.$$
Similarly, the problem $\Pi^{\mathrm{prP}}$ also has inputs $(x, N)$, with yes instances $\Pi^{\mathrm{prP}}_Y = \{(x, N) : f(x) \ge N\}$ and no instances $\Pi^{\mathrm{prP}}_N = \{(x, N) : f(x) < N\}$. Given a deterministic polynomial-time decider for $\Pi^{\mathrm{prP}}$, it is clear that we can construct a deterministic polynomial-time algorithm $B$ with $B(x) \le f(x) \le (1 + \varepsilon(|x|))B(x)$ for all $x$ (this is the binary search of step (c)). As a result, we only need to argue the reduction of step (a), which we do with the following randomized algorithm $C$:

On input $(x, N)$: run $A(x)$ in polynomial time. If $A(x) \ge N$, output Yes. If $A(x) \le N/(1 + \varepsilon(|x|))$, output No. If $N/(1 + \varepsilon(|x|)) < A(x) < N$, output Yes or No, each with probability $1/2$.

Now we check the completeness and soundness of this prBPP algorithm.

(Completeness) If $(x, N) \in \Pi^{\mathrm{prBPP}}_Y$, then $f(x) \ge N$. Consider
$$\Pr[C(x, N) = \text{Yes}] = \Pr[A(x) \ge N] + \tfrac{1}{2}\Pr[N > A(x) > N/(1 + \varepsilon(|x|))].$$

(Soundness) If $(x, N) \in \Pi^{\mathrm{prBPP}}_N$, then $f(x) < N$, and
$$\Pr[C(x, N) = \text{Yes}] \le \Pr[A(x) \ge N] + \tfrac{1}{2}\Pr[N > A(x) > N/(1 + \varepsilon(|x|))] \le \Pr[A(x) > N/(1 + \varepsilon(|x|))].$$
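A minimal Python sketch of the binary-search step (c); `decide` is a hypothetical stand-in for the deterministic polynomial-time decider assumed from prBPP = prP, and the toy $f$ is made up for the demonstration:

```python
def approximate_via_decision_oracle(x, decide, upper_bound):
    """Recover an approximation B(x) from a decision oracle for "is f(x) >= N?"
    by binary search over N in {0, ..., upper_bound}.  `upper_bound` is any
    bound 2^{poly(|x|)} on f(x).  Returns the largest N with decide(x, N)."""
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if decide(x, mid):       # f(x) >= mid (up to the approximation promise)
            lo = mid
        else:
            hi = mid - 1
    return lo

# Toy demonstration with a made-up f and an exact oracle.
f = lambda x: len(x) ** 3 + 7
oracle = lambda x, N: f(x) >= N
print(approximate_via_decision_oracle("hello", oracle, 10 ** 6))  # 132 = 5^3 + 7
```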
