Homework 4
By: John Steinberger

Problem 1. Recall that a real n×n matrix A is positive semidefinite if A is symmetric and x^T A x ≥ 0 for all x ∈ R^n. Assume A is a real n×n matrix. Show TFAE [1]:

(a) A is positive semidefinite;
(b) A is symmetric and the eigenvalues of A are all nonnegative (≥ 0);
(c) A can be written as B^T B for some n×n upper triangular matrix B (you can use last homework's result; vocabulary: B^T B is called the Cholesky decomposition of A);
(d) there exist vectors x_1, ..., x_n ∈ R^n such that A = Σ_{i=1}^n x_i x_i^T (note: in fact if k = rank(A), there exist k vectors x_1, ..., x_k ∈ R^n such that A = Σ_{i=1}^k x_i x_i^T; see Part C of Problem 1 of an earlier homework);
(e) there exists a symmetric matrix X such that X^2 = A (therefore A has a "square root"); moreover, X is positive semidefinite;
(f) A is symmetric and every symmetric minor of A has determinant ≥ 0 (a symmetric minor of A is the restriction of A to rows and columns belonging to some set J ⊆ [n]; for example if J = {1, 2} then the symmetric minor is the upper left-hand 2×2 submatrix of A);
(g) A can be written as P^{-1} D P where P is an orthogonal matrix [2] and D is a diagonal matrix with nonnegative entries on its diagonal;
(h) A is symmetric and ⟨A, X⟩ ≥ 0 for all n×n positive semidefinite matrices X, where ⟨A, X⟩ is the dot product of A and X considered as vectors: ⟨A, X⟩ = Σ_{i,j} A_ij X_ij.

Problem 2. Let X and Y be random variables taking values in some set S. We say X̃ is equidistributed to X if Pr[X = s] = Pr[X̃ = s] for all s ∈ S. The notation X ∼ X̃ means that X and X̃ are equidistributed. We say X and Y are independent if Pr[X = s ∧ Y = t] = Pr[X = s] Pr[Y = t] for all s, t ∈ S. For this problem, X and Y may or may not be independent; it doesn't matter.

Part A. Prove there exists a pair of random variables (X̃, Ỹ) such that X̃ ∼ X, Ỹ ∼ Y, and such that Pr[X̃ ≠ Ỹ] = Δ(X, Y), where Δ(X, Y) is the statistical distance between X and Y.
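Back to Problem 1 for a moment: before proving the equivalences, it can help to spot-check (a), (b) and (c) numerically. A minimal pure-Python sketch for a 2×2 example (the matrix B below is an arbitrary illustrative choice, not from the homework):

```python
import random

# Take an upper-triangular B, form A = B^T B, and check that
# (a) x^T A x >= 0 on random vectors and (b) both eigenvalues
# of the symmetric 2x2 matrix A are nonnegative.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(P):
    return [[P[j][i] for j in range(2)] for i in range(2)]

B = [[1.0, 2.0], [0.0, 3.0]]       # upper triangular, as in (c)
A = matmul(transpose(B), B)        # A = B^T B, hence symmetric

# (a): the quadratic form is nonnegative on random vectors
for _ in range(1000):
    x = [random.uniform(-10, 10), random.uniform(-10, 10)]
    quad = sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))
    assert quad >= -1e-9

# (b): eigenvalues of a symmetric 2x2 [[a, b], [b, c]] are
#      (a + c)/2 +- sqrt(((a - c)/2)^2 + b^2); both must be >= 0
a, b, c = A[0][0], A[0][1], A[1][1]
disc = (((a - c) / 2) ** 2 + b ** 2) ** 0.5
eigs = ((a + c) / 2 - disc, (a + c) / 2 + disc)
print(A, eigs)
```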
Note: For the expression Pr[X̃ ≠ Ỹ] to make sense, X̃ and Ỹ need to be defined over the same probability space. You can construct this probability space however you want. All we want is that X̃ ∼ X, Ỹ ∼ Y, and that Pr[X̃ ≠ Ỹ] = Δ(X, Y). Other note: the pair (X̃, Ỹ) is called a coupling of X and Y.

Part B. Prove that Pr[X̃ ≠ Ỹ] ≥ Δ(X, Y) for any pair of random variables (X̃, Ỹ) (defined over the same probability space) such that X̃ ∼ X, Ỹ ∼ Y.

Part C. Conclude by Parts A and B that

Δ(X, Y) = min_{X̃ ∼ X, Ỹ ∼ Y} Pr[X̃ ≠ Ỹ],

where the notation means that the min is taken over all pairs of random variables (X̃, Ỹ) defined over a common probability space and such that X̃ ∼ X, Ỹ ∼ Y.

Part D. Use Part C to give a different proof that

Δ_D(X, Y) := |Pr[D(X) = 1] − Pr[D(Y) = 1]| ≤ Δ(X, Y)

for any distinguisher D : S → {0, 1}.

Problem 3. If X is a random variable, we write X^r for a random variable composed of r independent copies of X. That is, X^r = (X_1, ..., X_r) where X_i ∼ X for each i and the X_i's are independent. (Note: a random variable of the form Z = (Z_1, ..., Z_r) whose coordinates are independent (i.e. where the Z_i's are independent) is sometimes called a product distribution.)

Say Δ(X, Y) = ε. How large could Δ(X^r, Y^r) be? How small? Find examples maximizing and minimizing Δ(X^r, Y^r), where the only constraint is that Δ(X, Y) = ε. You do not need to prove your examples are best possible. Also, you may find it hard to exactly compute Δ(X^r, Y^r). In this case, for your examples, simply mention how large r must be, as a function of ε, such that Δ(X^r, Y^r) = Ω(1).

[1] TFAE: The Following Are Equivalent.
[2] See the last homework for the definition of orthogonal matrix.

Problem 4. For this problem, all the random variables we consider have range [N] = {1, ..., N}. A random variable X is a convex combination of random variables Y_1, ..., Y_t if there exist numbers w_1, ..., w_t, 0 ≤ w_i, Σ_{i=1}^t w_i = 1, such that

Pr[X = i] = Σ_{j=1}^t w_j Pr[Y_j = i] for all i ∈ [N].

Let v = (v_1, ..., v_N) ∈ R^N. We define

v(X) = Σ_{i=1}^N v_i Pr[X = i].

(Say that v_i is the value of i ∈ [N]; then v(X) is the expected value obtained by selecting i according to X. The vector v is also called a payoff vector or utility vector.)

Part A. Show that X is a convex combination of Y_1, ..., Y_k if and only if v(X) ≥ min_j v(Y_j) for all v ∈ R^N. (One could also say: if and only if v(X) ≤ max_j v(Y_j) for all v ∈ R^N.)

Part B. Let K ≤ N. Say that X is K-flat if for every i ∈ [N], either Pr[X = i] = 0 or Pr[X = i] = 1/K. (This implies X is uniformly distributed over a subset of [N] of size K.) The min-entropy of X, written H_∞(X), is defined as

H_∞(X) = min_{x ∈ [N]} log_2(1 / Pr[X = x]).

(So if X is K-flat, then H_∞(X) = log_2(K).) Show that H_∞(X) ≥ log_2(K) if and only if X is a convex combination of K-flat sources [3].

Problem 5. (Note: If you want to start thinking right away about the problem, you can jump forward to where it says "Toy version" below.) The complexity class BPP is called a two-sided error complexity class because we are allowed to make mistakes for both yes and no answers.
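The objects of Problems 2 and 3 can be computed exactly (no sampling) for small distributions; a minimal pure-Python sketch (the helper names here are ad hoc, not from the homework):

```python
from itertools import product

# Distributions are dicts mapping outcomes to probabilities.

def stat_dist(p, q):
    """Statistical distance: (1/2) * sum_s |p(s) - q(s)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in support)

def product_dist(p, r):
    """The r-fold product distribution (Problem 3's X^r), computed exactly."""
    out = {}
    for w in product(p.keys(), repeat=r):
        pr = 1.0
        for s in w:
            pr *= p[s]
        out[w] = pr
    return out

eps = 0.1
X = {0: 0.5, 1: 0.5}
Y = {0: 0.5 + eps, 1: 0.5 - eps}
assert abs(stat_dist(X, Y) - eps) < 1e-12

# Part A's optimal coupling keeps mass min(p(s), q(s)) on the diagonal,
# so its mismatch probability is 1 - sum_s min(p(s), q(s)), which
# equals the statistical distance.
mismatch = 1.0 - sum(min(X[s], Y[s]) for s in X)
assert abs(mismatch - stat_dist(X, Y)) < 1e-12

# Problem 3: the distance of the r-fold products grows with r.
dists = [stat_dist(product_dist(X, r), product_dist(Y, r)) for r in (1, 4, 16)]
print(dists)
```

The printed list is nondecreasing: taking more independent copies can only make the two distributions easier to tell apart.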
The one-sided error analogues of BPP are two complexity classes called RP and coRP. A language L is in RP if and only if there exists a probabilistic polynomial-time Turing machine M such that

x ∉ L ⟹ Pr[M(x) = 0] = 1,
x ∈ L ⟹ Pr[M(x) = 1] ≥ 0.5.

This is the short way of saying the definition. More formally, since M is a probabilistic Turing machine, it takes two inputs: the normal input x and the random tape y. Then M is probabilistic polynomial-time if there exists a polynomial p(n) such that M(x, y) halts in at most p(|x|) steps for all x ∈ {0,1}*, for any y ∈ {0,1}^{p(|x|)}. Note that we can assume the random tape has length p(|x|) since anyway M cannot read more than p(n) bits of the random tape in p(n) steps. (Indeed, I did something a bit stupid when I defined BPP in HW 1, because I allowed the random tape to have a different length than the running time of the machine; normally, one simply considers the random tape to have the same length as the running time.)

In any case, the more formal way of writing things is to say that L ∈ RP if and only if there exists a PPTM (Probabilistic Polynomial-time Turing Machine) M of running time p(n) such that

x ∉ L ⟹ Pr_{y ∈ {0,1}^{p(|x|)}}[M(x, y) = 0] = 1,
x ∈ L ⟹ Pr_{y ∈ {0,1}^{p(|x|)}}[M(x, y) = 1] ≥ 0.5

for every x ∈ {0,1}*. For coRP, it's the opposite: a language L is in coRP if and only if there exists a PPTM M of running time p(n) such that

x ∉ L ⟹ Pr_{y ∈ {0,1}^{p(|x|)}}[M(x, y) = 0] ≥ 0.5,
x ∈ L ⟹ Pr_{y ∈ {0,1}^{p(|x|)}}[M(x, y) = 1] = 1

for every x ∈ {0,1}*.

Note that for RP, if M outputs 0 then maybe it has made a mistake (because it is allowed to output 0 even if the correct answer is 1). For coRP, if M outputs 1 then maybe it has made a mistake. Put otherwise:

for RP, if M outputs 1 then it is sure of its answer;
for coRP, if M outputs 0 then it is sure of its answer.

This is how I remember which is RP and which is coRP: RP is the one where you need to be sure when you say "yes" (when you say yes it means you've actually understood something about your input). And coRP is the one where you need to be sure when you say "no" (if you say no it means you've actually understood something about your input). Another way to remember is that the definitions have been made so that RP ⊆ NP and coRP ⊆ coNP. (Check those inclusions for yourself.)

The constant 0.5 in the definition could be changed to 2/3 or 9/10 or 1/10 without affecting which languages are in RP and coRP, because we can apply error reduction to navigate between these constants.

[3] Source is a synonym of random variable.
For example, if a PPTM M is such that

x ∉ L ⟹ Pr[M(x) = 0] = 1,
x ∈ L ⟹ Pr[M(x) = 1] ≥ 0.01,

then applying error reduction (more precisely, repeating M 100 times on each input, and outputting 1 if and only if M ever outputs 1) one finds a PPTM M' such that

x ∉ L ⟹ Pr[M'(x) = 0] = 1,
x ∈ L ⟹ Pr[M'(x) = 1] ≥ 1 − (1 − 0.01)^100 ≥ 1 − 1/e.

Repeating M 200 times we would get Pr[M'(x) = 1] ≥ 1 − 1/e^2, and so forth. In fact, we can even apply error reduction starting from a probability of correctness that is only inverse-polynomially bounded away from 0; more precisely, say M is a PPTM and q(n) > 0 is a polynomial such that

x ∉ L ⟹ Pr[M(x) = 0] = 1,
x ∈ L ⟹ Pr[M(x) = 1] ≥ 1/q(n)

for all x ∈ {0,1}*. Then if we build a machine M' by running M q(n)^2 times (and answering yes if M answers yes a single time), then M' is also a PPTM and we have

x ∉ L ⟹ Pr[M'(x) = 0] = 1,
x ∈ L ⟹ Pr[M'(x) = 1] ≥ 1 − e^{−q(n)}.
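The arithmetic above is easy to sanity-check numerically; a small sketch (the probabilities and repetition counts mirror the example in the text, nothing else is assumed):

```python
import math

# Repeating an RP machine with success probability prob, t times, and
# answering "yes" iff some run says "yes", succeeds with probability
# 1 - (1 - prob)^t.  The text's bound uses (1 - a)^b <= e^(-a*b).

prob = 0.01
amplified_100 = 1 - (1 - prob) ** 100
amplified_200 = 1 - (1 - prob) ** 200
assert amplified_100 >= 1 - math.exp(-1)   # >= 1 - 1/e
assert amplified_200 >= 1 - math.exp(-2)   # >= 1 - 1/e^2

# Inverse-polynomial starting point: prob = 1/q(n), repeated q(n)^2 times.
for q in (5, 10, 50):
    amp = 1 - (1 - 1 / q) ** (q * q)
    assert amp >= 1 - math.exp(-q)

print(amplified_100, amplified_200)
```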
Since the constant 0.5 can be replaced by 2/3, we have RP ⊆ BPP and coRP ⊆ BPP. This is normal: if you're allowed two-sided error then it's easier than if you're only allowed one-sided error. We also have P ⊆ RP and P ⊆ coRP. So we can write the pretty chain:

P ⊆ RP ∩ coRP ⊆ RP ∪ coRP ⊆ BPP.

(Pretty, but completely useless. In fact, people believe that P = BPP, even if they are not able to prove it. Thus people really believe that P = RP = coRP = BPP.)

To illustrate coRP, consider the language PRIMES, which is the set of binary strings whose value, as an integer, is a prime number. Actually, people now know that PRIMES ∈ P, but let's ignore that right now. As first-year students you learned the Miller-Rabin primality test which, if it answers "no", has detected that its input is composite (non-prime). Thus, this test shows PRIMES ∈ coRP.

As another example, consider the language ZEROPOLY consisting of strings that encode an algebraic circuit (that's like a boolean circuit, but now the gates are × and + instead of ∧ and ∨) over the field F_2 = {0, 1} that computes a symbolic polynomial that is the 0 polynomial. (A symbolic polynomial: over the field F_2, the polynomial x^2 + x is always equal to 0, so as a function it is the 0 function; but as a symbolic polynomial, i.e. as a sequence of coefficients, this polynomial is not the 0 polynomial.) For example, consider these two algebraic circuits:

[Figure: two algebraic circuits, labeled (a) and (b), each with inputs x_1 and x_2.]

The circuit on the left computes the polynomial x_1 x_2 + x_1 x_2 = 0, which is (symbolically) the 0 polynomial. The circuit on the right computes the polynomial (x_1 + x_1 x_2) x_2 = x_1 x_2 + x_1 x_2^2. Even though this second polynomial is always equal to 0 over F_2, it is not, symbolically, the 0 polynomial. So the circuit on the left is in ZEROPOLY but the circuit on the right is not in ZEROPOLY. Note that a circuit which takes n bits to describe will have depth at most n and so will give a polynomial of degree d ≤ 2^n (assuming at most two input wires per gate).

To test if a circuit is symbolically 0, we take each input gate [4] x_i at random from the finite field F_{2^{2n}} of size 2^{2n}, and apply the circuit to these inputs, working over F_{2^{2n}}. If the circuit is symbolically 0, we will obtain 0; otherwise, since the polynomial computed has degree at most 2^n, the Schwartz-Zippel lemma implies that there is chance at most 2^n / 2^{2n} = 1/2^n of obtaining 0. Note that it takes 2n = O(n) bits to represent an element of F_{2^{2n}}, so that the value of the circuit on inputs in F_{2^{2n}} can be evaluated [5] in poly(n)-time (given that the circuit itself has O(n) gates). Thus this gives us a one-sided error algorithm, and ZEROPOLY ∈ coRP. (It is when we say "no" that we are sure of our answer, with this algorithm.) Unlike PRIMES, which is known to be in P, ZEROPOLY is not known to be in P. (Actually, the problem of placing ZEROPOLY in P is a big research topic in TCS, known as polynomial identity testing.)

So far, we have only given examples of languages in coRP. But languages in RP are easy to construct from languages in coRP. Indeed, one has: L ∈ RP if and only if L̄ ∈ coRP (where L̄ is the complement of L).

[4] Here x_i does not refer to the i-th bit of the input string! The input string describes an algebraic circuit, and has length n. The circuit which is described very likely has fewer than n input gates. We are using x_i as the name of the value given to the i-th input gate.
[5] A more subtle issue is whether we can find a polynomial of degree 2n over F_2 that is irreducible over F_2: we need this polynomial to construct the field F_{2^{2n}} and do computations over it! Thankfully, such an irreducible polynomial can be found in poly(n) time.
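The two example circuits can be worked out both ways: symbolically over F_2, and with the randomized evaluation test. In the sketch below the random evaluation is done over GF(2^8) with the well-known modulus x^8 + x^4 + x^3 + x + 1 rather than over the F_{2^{2n}} the text prescribes; this is an illustrative simplification, and all helper names are my own:

```python
import random

# --- Symbolic polynomials over F_2: dict {(e1, e2): coeff mod 2} ---
def padd(f, g):
    out = dict(f)
    for m, c in g.items():
        out[m] = (out.get(m, 0) + c) % 2
    return {m: c for m, c in out.items() if c}

def pmul(f, g):
    out = {}
    for (a1, a2), c in f.items():
        for (b1, b2), d in g.items():
            m = (a1 + b1, a2 + b2)
            out[m] = (out.get(m, 0) + c * d) % 2
    return {m: c for m, c in out.items() if c}

x1, x2 = {(1, 0): 1}, {(0, 1): 1}
left = padd(pmul(x1, x2), pmul(x1, x2))    # x1*x2 + x1*x2: symbolically 0
right = pmul(padd(x1, pmul(x1, x2)), x2)   # (x1 + x1*x2)*x2 = x1*x2 + x1*x2^2
assert left == {} and right == {(1, 1): 1, (1, 2): 1}

# --- Randomized test over GF(2^8), an extension field of F_2 ---
def gf_mul(a, b):
    # carry-less multiply, reduced mod x^8 + x^4 + x^3 + x + 1 (0x11B)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def eval_left(a, b):    # addition in GF(2^8) is XOR
    return gf_mul(a, b) ^ gf_mul(a, b)

def eval_right(a, b):
    return gf_mul(a ^ gf_mul(a, b), b)

random.seed(0)
trials = 200
pts = [(random.randrange(256), random.randrange(256)) for _ in range(trials)]
left_zero = sum(eval_left(a, b) == 0 for (a, b) in pts)
right_zero = sum(eval_right(a, b) == 0 for (a, b) in pts)
print(left_zero, right_zero)
```

The left circuit evaluates to 0 at every point, while the right circuit evaluates to 0 only rarely (Schwartz-Zippel bounds the per-trial probability by deg/|F| = 3/256 for a nonzero polynomial of total degree 3); a single nonzero evaluation certifies that the right circuit is not in ZEROPOLY.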
You can check this equivalence (L ∈ RP if and only if L̄ ∈ coRP) for yourself. So the language NONZEROPOLY (with the obvious definition) is in RP. But ZEROPOLY is not known to be in RP, and (symmetrically) NONZEROPOLY is not known to be in coRP.

There is one more natural complexity class along the same lines, known as ZPP. This is the class of languages for which there is a PPTM that never makes an error, but that is sometimes undecided. Formally, L ∈ ZPP if and only if there exists a PPTM M such that

x ∉ L ⟹ Pr[M(x) = 0] ≥ 1/2 and Pr[M(x) = 1] = 0,
x ∈ L ⟹ Pr[M(x) = 1] ≥ 1/2 and Pr[M(x) = 0] = 0.

Here M is allowed to answer a special symbol ⊥ which means "I don't know". Thus with probability ≥ 1/2, M answers correctly, and M never outputs the wrong answer, but M is also allowed to output no answer at all. By error reduction, the fraction 1/2 could be replaced by any constant arbitrarily close to 1.

For fun, why don't you decide whether ZPP ⊆ BPP or BPP ⊆ ZPP. Also decide which of these is true:

RP ∩ coRP ⊆ ZPP,
RP ∩ coRP = ZPP,
ZPP ⊆ RP ∩ coRP.

(If the middle one is true then of course all three are true.) (These questions are not part of Problem 5.)

OK... In the first homework, in Problem 5, we considered error reduction for BPP. In the case of RP and coRP, doing error reduction is much more trivial. (For BPP you need to use a Chernoff bound to compute by how much error is reduced when you repeat the algorithm a large number of times; for RP and coRP, you just need to know the formula (1 − a)^b ≤ e^{−ab}.) In this problem we will not consider by how much error can be reduced using repetition for RP and coRP (which is kind of stupid), but how much randomness is really needed to reduce the error by a certain amount. Basically, we are going to consider the question of randomness-efficient error reduction (and in particular, whether such randomness-efficient error reduction is possible at all).

Say that we have a language L in RP and a PPTM M of running time p(n) such that

x ∉ L ⟹ Pr_{r ∈ {0,1}^{p(|x|)}}[M(x, r) = 0] = 1,
x ∈ L ⟹ Pr_{r ∈ {0,1}^{p(|x|)}}[M(x, r) = 1] ≥ 0.5

for every x ∈ {0,1}*.
Fix some x ∈ {0,1}* and let n = |x|. If x ∉ L then there is no problem, M will always be right, so assume that x ∈ L. Then there is a set B ⊆ {0,1}^{p(n)} of bad random strings such that M(x, r) = 0 if r ∈ B, and we know |B| ≤ 2^{p(n)}/2. If we repeat M t times on t entirely independent random strings r_1, ..., r_t, the probability of being unlucky all t times (i.e., having r_1, ..., r_t ∈ B, so that we still answer 0 in the end) is at most 0.5^t. This is our probability of error after t (independent) repetitions. Note that sampling r_1, ..., r_t costs us tp(n) random coins. That is:

For t independent repetitions of M, we achieve error 0.5^t, using tp(n) random coins.

Still repeating M t times, but on dependent strings r_1, ..., r_t (dependent strings might require fewer coins to sample!), we would like to know if we can achieve error something like 0.5^t (maybe c^t for some other constant c < 1, like c = 2/3), while using substantially fewer than tp(n) coins to sample the strings r_1, ..., r_t. Basically, this question:

For t non-independent repetitions of M, can we achieve (say) error (2/3)^t, with much fewer than tp(n) random coins?
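The independent-repetition baseline can be simulated directly; a small sketch (the parameters p = 10 and t = 5 are arbitrary illustrative choices, and B is an arbitrary half of R):

```python
import random

# Simulate t fully independent repetitions against a worst-case bad set:
# R = {0,1}^p identified with integers 0..2^p - 1, and |B| = |R|/2.
random.seed(1)
p = 10
R = list(range(2 ** p))
B = set(random.sample(R, 2 ** p // 2))   # an arbitrary "bad" half of R

t = 5
trials = 20000
bad = sum(all(random.choice(R) in B for _ in range(t)) for _ in range(trials))
freq = bad / trials

# empirical error, the target 0.5^t, and the coins spent per attempt: t*p
print(freq, 0.5 ** t, t * p)
```

The empirical all-bad frequency hovers around 0.5^t = 1/32, at a cost of t·p = 50 random coins; the question above asks whether comparable error is achievable with far fewer coins.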
Note that the set B of bad random strings could change for each x, so we basically have no knowledge of what B is. Our method should work for any set B of bad strings such that |B| ≤ 2^{p(n)}/2. Solving this problem actually requires tools you haven't learned. All we are going to do now is to look at a toy version of the problem, just to see if the problem could even possibly have a good solution. The toy version is the following:

Toy version. Let R = {0,1}^p be the set of random strings [6]. Let m be a number of random coins. Say a function f : {0,1}^m → R^t is a good sampling method if

Pr_{z ∈ {0,1}^m}[f(z) ∈ B^t] ≤ (2/3)^t

for any set B ⊆ R such that |B|/|R| ≤ 0.5. Find the smallest m that you can such that a good sampling method exists. (End Toy Version.)

Well, that's it! That's Problem 5. Note that R^t means the set {(r_1, ..., r_t) : r_i ∈ R, 1 ≤ i ≤ t} and that the event f(z) ∈ B^t is exactly the bad event that each random string r_i in f(z) =: (r_1, ..., r_t) is in B. We called the above a toy version of the problem because it only considers whether f exists, and not whether f is actually easy to compute. To be useful in the real world, f should be a poly-time function, but for the toy version we don't worry about that. (Actually, this last comment should give you a big hint as to how to construct f.)

Bonus Part: Replace 2/3 by (1 + c)0.5, for fixed c > 0, and see how m depends on c. (This means, (2/3)^t is replaced by [(1 + c)/2]^t.) You should consider that c > 0 is fixed, and that t → ∞.

Note: I am not really sure, but maybe for this problem you will get better bounds by using the following "strong Chernoff bounds": If X_1, ..., X_n ∈ [0, 1] are independent random variables and if X = Σ_{i=1}^n X_i, then we have:

(a) Pr[X ≥ (1 + δ)E[X]] ≤ (e^δ / (1 + δ)^{1+δ})^{E[X]},

(b) Pr[X < (1 − δ)E[X]] ≤ (e^{−δ} / (1 − δ)^{1−δ})^{E[X]} ≤ e^{−δ^2 E[X]/2}.

These bounds are much uglier than the usual Chernoff bound, and so less practical (usually). But maybe here they will give a better m. I'm not sure, you can try.

Problem 6.
We have now defined a number of complexity classes that refer to probabilistic computations: BPP, RP, coRP, ZPP. But the famous complexity class NP does not refer to probabilistic computation. Try to mix the definition of NP with the definition of BPP (or with one of the other randomized computation classes) to create your own definition of a new complexity class, one that has a bit of both. There is no right answer; just try to make your complexity class a cool one.

[6] We have consciously replaced p(n) by p (this is not a typo). In the toy version we fix the input to some length, and we don't need to mention dependence on n anymore.
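Finally, for the bonus part of Problem 5, it may help to see how the strong Chernoff bounds quoted there compare numerically with their common simplified forms e^{−δ^2 E[X]/3} (upper tail) and e^{−δ^2 E[X]/2} (lower tail); a quick sketch, assuming only standard facts about these bounds (the simplified forms are valid for 0 < δ ≤ 1, where the strong forms are tighter):

```python
import math

def strong_upper(d, mu):
    # Pr[X >= (1+d) mu] <= (e^d / (1+d)^(1+d))^mu
    return (math.exp(d) / (1 + d) ** (1 + d)) ** mu

def strong_lower(d, mu):
    # Pr[X < (1-d) mu] <= (e^(-d) / (1-d)^(1-d))^mu
    return (math.exp(-d) / (1 - d) ** (1 - d)) ** mu

mu = 10.0
for d in (0.1, 0.5, 0.9):
    su, sl = strong_upper(d, mu), strong_lower(d, mu)
    assert su <= math.exp(-d * d * mu / 3)   # strong upper tail is tighter
    assert sl <= math.exp(-d * d * mu / 2)   # strong lower tail is tighter
    print(d, su, sl)
```

For δ > 1 the simplified upper-tail form no longer applies at all, while the strong form still does, which is why the strong bounds may give a better m here.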
More information1 Showing Recognizability
CSCC63 Worksheet Recognizability and Decidability 1 1 Showing Recognizability 1.1 An Example - take 1 Let Σ be an alphabet. L = { M M is a T M and L(M) }, i.e., that M accepts some string from Σ. Prove
More informationFinal Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2
Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch
More information1 Maintaining a Dictionary
15-451/651: Design & Analysis of Algorithms February 1, 2016 Lecture #7: Hashing last changed: January 29, 2016 Hashing is a great practical tool, with an interesting and subtle theory too. In addition
More informationLecture 3: Latin Squares and Groups
Latin Squares Instructor: Padraic Bartlett Lecture 3: Latin Squares and Groups Week 2 Mathcamp 2012 In our last lecture, we came up with some fairly surprising connections between finite fields and Latin
More information: On the P vs. BPP problem. 30/12/2016 Lecture 11
03684155: On the P vs. BPP problem. 30/12/2016 Lecture 11 Promise problems Amnon Ta-Shma and Dean Doron 1 Definitions and examples In a promise problem, we are interested in solving a problem only on a
More informationPreliminaries and Complexity Theory
Preliminaries and Complexity Theory Oleksandr Romanko CAS 746 - Advanced Topics in Combinatorial Optimization McMaster University, January 16, 2006 Introduction Book structure: 2 Part I Linear Algebra
More informationRandomness. What next?
Randomness What next? Random Walk Attribution These slides were prepared for the New Jersey Governor s School course The Math Behind the Machine taught in the summer of 2012 by Grant Schoenebeck Large
More informationMath 138: Introduction to solving systems of equations with matrices. The Concept of Balance for Systems of Equations
Math 138: Introduction to solving systems of equations with matrices. Pedagogy focus: Concept of equation balance, integer arithmetic, quadratic equations. The Concept of Balance for Systems of Equations
More informationIntroduction to Machine Learning
Introduction to Machine Learning 236756 Prof. Nir Ailon Lecture 4: Computational Complexity of Learning & Surrogate Losses Efficient PAC Learning Until now we were mostly worried about sample complexity
More informationAnswers in blue. If you have questions or spot an error, let me know. 1. Find all matrices that commute with A =. 4 3
Answers in blue. If you have questions or spot an error, let me know. 3 4. Find all matrices that commute with A =. 4 3 a b If we set B = and set AB = BA, we see that 3a + 4b = 3a 4c, 4a + 3b = 3b 4d,
More informationTheory of Computer Science to Msc Students, Spring Lecture 2
Theory of Computer Science to Msc Students, Spring 2007 Lecture 2 Lecturer: Dorit Aharonov Scribe: Bar Shalem and Amitai Gilad Revised: Shahar Dobzinski, March 2007 1 BPP and NP The theory of computer
More informationHypothesis testing I. - In particular, we are talking about statistical hypotheses. [get everyone s finger length!] n =
Hypothesis testing I I. What is hypothesis testing? [Note we re temporarily bouncing around in the book a lot! Things will settle down again in a week or so] - Exactly what it says. We develop a hypothesis,
More informationc 1 v 1 + c 2 v 2 = 0 c 1 λ 1 v 1 + c 2 λ 1 v 2 = 0
LECTURE LECTURE 2 0. Distinct eigenvalues I haven t gotten around to stating the following important theorem: Theorem: A matrix with n distinct eigenvalues is diagonalizable. Proof (Sketch) Suppose n =
More informationLecture 6: Random Walks versus Independent Sampling
Spectral Graph Theory and Applications WS 011/01 Lecture 6: Random Walks versus Independent Sampling Lecturer: Thomas Sauerwald & He Sun For many problems it is necessary to draw samples from some distribution
More informationComplexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler
Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard
More informationOutline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.
Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität
More informationJASS 06 Report Summary. Circuit Complexity. Konstantin S. Ushakov. May 14, 2006
JASS 06 Report Summary Circuit Complexity Konstantin S. Ushakov May 14, 2006 Abstract Computer science deals with many computational models. In real life we have normal computers that are constructed using,
More informationQuadratic Equations Part I
Quadratic Equations Part I Before proceeding with this section we should note that the topic of solving quadratic equations will be covered in two sections. This is done for the benefit of those viewing
More informationUndecidability. Andreas Klappenecker. [based on slides by Prof. Welch]
Undecidability Andreas Klappenecker [based on slides by Prof. Welch] 1 Sources Theory of Computing, A Gentle Introduction, by E. Kinber and C. Smith, Prentice-Hall, 2001 Automata Theory, Languages and
More informationMath 308 Midterm Answers and Comments July 18, Part A. Short answer questions
Math 308 Midterm Answers and Comments July 18, 2011 Part A. Short answer questions (1) Compute the determinant of the matrix a 3 3 1 1 2. 1 a 3 The determinant is 2a 2 12. Comments: Everyone seemed to
More informationLecture 2: From Classical to Quantum Model of Computation
CS 880: Quantum Information Processing 9/7/10 Lecture : From Classical to Quantum Model of Computation Instructor: Dieter van Melkebeek Scribe: Tyson Williams Last class we introduced two models for deterministic
More informationNotes for Lecture 3... x 4
Stanford University CS254: Computational Complexity Notes 3 Luca Trevisan January 14, 2014 Notes for Lecture 3 In this lecture we introduce the computational model of boolean circuits and prove that polynomial
More informationMath101, Sections 2 and 3, Spring 2008 Review Sheet for Exam #2:
Math101, Sections 2 and 3, Spring 2008 Review Sheet for Exam #2: 03 17 08 3 All about lines 3.1 The Rectangular Coordinate System Know how to plot points in the rectangular coordinate system. Know the
More informationCS264: Beyond Worst-Case Analysis Lecture #15: Topic Modeling and Nonnegative Matrix Factorization
CS264: Beyond Worst-Case Analysis Lecture #15: Topic Modeling and Nonnegative Matrix Factorization Tim Roughgarden February 28, 2017 1 Preamble This lecture fulfills a promise made back in Lecture #1,
More informationIntroduction to Advanced Results
Introduction to Advanced Results Master Informatique Université Paris 5 René Descartes 2016 Master Info. Complexity Advanced Results 1/26 Outline Boolean Hierarchy Probabilistic Complexity Parameterized
More informationNP-Complete problems
NP-Complete problems NP-complete problems (NPC): A subset of NP. If any NP-complete problem can be solved in polynomial time, then every problem in NP has a polynomial time solution. NP-complete languages
More information[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]
Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the
More information/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: NP-Completeness I Date: 11/13/18
601.433/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: NP-Completeness I Date: 11/13/18 20.1 Introduction Definition 20.1.1 We say that an algorithm runs in polynomial time if its running
More information1 Distributional problems
CSCI 5170: Computational Complexity Lecture 6 The Chinese University of Hong Kong, Spring 2016 23 February 2016 The theory of NP-completeness has been applied to explain why brute-force search is essentially
More informationQuestions Pool. Amnon Ta-Shma and Dean Doron. January 2, Make sure you know how to solve. Do not submit.
Questions Pool Amnon Ta-Shma and Dean Doron January 2, 2017 General guidelines The questions fall into several categories: (Know). (Mandatory). (Bonus). Make sure you know how to solve. Do not submit.
More informationLecture 4. 1 Circuit Complexity. Notes on Complexity Theory: Fall 2005 Last updated: September, Jonathan Katz
Notes on Complexity Theory: Fall 2005 Last updated: September, 2005 Jonathan Katz Lecture 4 1 Circuit Complexity Circuits are directed, acyclic graphs where nodes are called gates and edges are called
More informationCSCI 1590 Intro to Computational Complexity
CSCI 1590 Intro to Computational Complexity Randomized Computation John E. Savage Brown University April 15, 2009 John E. Savage (Brown University) CSCI 1590 Intro to Computational Complexity April 15,
More informationCSC321 Lecture 16: ResNets and Attention
CSC321 Lecture 16: ResNets and Attention Roger Grosse Roger Grosse CSC321 Lecture 16: ResNets and Attention 1 / 24 Overview Two topics for today: Topic 1: Deep Residual Networks (ResNets) This is the state-of-the
More informationRandomness and non-uniformity
Randomness and non-uniformity Felix Weninger April 2006 Abstract In the first part, we introduce randomized algorithms as a new notion of efficient algorithms for decision problems. We classify randomized
More informationCSCI3390-Assignment 2 Solutions
CSCI3390-Assignment 2 Solutions due February 3, 2016 1 TMs for Deciding Languages Write the specification of a Turing machine recognizing one of the following three languages. Do one of these problems.
More information6.045J/18.400J: Automata, Computability and Complexity Final Exam. There are two sheets of scratch paper at the end of this exam.
6.045J/18.400J: Automata, Computability and Complexity May 20, 2005 6.045 Final Exam Prof. Nancy Lynch Name: Please write your name on each page. This exam is open book, open notes. There are two sheets
More informationCS 301. Lecture 18 Decidable languages. Stephen Checkoway. April 2, 2018
CS 301 Lecture 18 Decidable languages Stephen Checkoway April 2, 2018 1 / 26 Decidable language Recall, a language A is decidable if there is some TM M that 1 recognizes A (i.e., L(M) = A), and 2 halts
More informationLecture 7: Positive Semidefinite Matrices
Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
More informationLecture 4: Constructing the Integers, Rationals and Reals
Math/CS 20: Intro. to Math Professor: Padraic Bartlett Lecture 4: Constructing the Integers, Rationals and Reals Week 5 UCSB 204 The Integers Normally, using the natural numbers, you can easily define
More information1 Acceptance, Rejection, and I/O for Turing Machines
1 Acceptance, Rejection, and I/O for Turing Machines Definition 1.1 (Initial Configuration) If M = (K,Σ,δ,s,H) is a Turing machine and w (Σ {, }) then the initial configuration of M on input w is (s, w).
More informationPr[C = c M = m] = Pr[C = c] Pr[M = m] Pr[M = m C = c] = Pr[M = m]
Midterm Review Sheet The definition of a private-key encryption scheme. It s a tuple Π = ((K n,m n,c n ) n=1,gen,enc,dec) where - for each n N, K n,m n,c n are sets of bitstrings; [for a given value of
More information