Lecture 7: ɛ-biased and almost k-wise independent spaces
Topics in Complexity Theory and Pseudorandomness (Spring 2013)
Rutgers University
Swastik Kopparty
Scribes: Ben Lund, Tim Naumovitz

Today we will see ε-biased spaces, almost k-wise independent spaces, and some applications.

1 Distributions

Let $\mu$ be a distribution on $\{0,1\}^n$.

Lemma 1. $\mu$ is uniformly distributed on $\{0,1\}^n$ iff for each nonempty $S \subseteq [n]$,
$$\Pr_{x \sim \mu}\Big[\sum_{i \in S} x_i = 1\Big] = \frac{1}{2}$$
(where the addition is mod 2).

Proof. Let $\chi_S : \{0,1\}^n \to \mathbb{R}$ where
$$\chi_S(x) = \begin{cases} 1 & \text{if } \sum_{i \in S} x_i = 0 \\ -1 & \text{if } \sum_{i \in S} x_i = 1. \end{cases}$$
Thus $\chi_S(x) = (-1)^{\sum_{i \in S} x_i}$.

We can treat $\mu$ as a real-valued function $\mu : \{0,1\}^n \to \mathbb{R}$ with $\mu(x) \geq 0$ for all $x \in \{0,1\}^n$ and $\sum_{x \in \{0,1\}^n} \mu(x) = 1$.

For nonempty $S$, consider the inner product:
$$\langle \mu, \chi_S \rangle = \sum_{x \in \{0,1\}^n} \mu(x)\chi_S(x) = \sum_{x :\, \chi_S(x) = 1} \mu(x) - \sum_{x :\, \chi_S(x) = -1} \mu(x) = \Pr_{x \sim \mu}\Big[\sum_{i \in S} x_i = 0\Big] - \Pr_{x \sim \mu}\Big[\sum_{i \in S} x_i = 1\Big] = 0$$
(by our hypothesis).

Aside: Characters and Fourier analysis

The $\chi_S$ are called the characters of the group $(\mathbb{Z}_2^n, +)$. We have the following important properties:
1. $\chi_S(x + y) = \chi_S(x) \cdot \chi_S(y)$.

2. For any nonempty $S$, we have $\sum_{x \in \{0,1\}^n} \chi_S(x) = 0$. This follows from the following calculation. Pick any $i \in S$. Then:
$$\sum_{x \in \{0,1\}^n} \chi_S(x) = \sum_{x \in \{0,1\}^n} \chi_S(x + e_i) = -\sum_{x \in \{0,1\}^n} \chi_S(x),$$
where $e_i$ is the 0-1 vector with a 1 only in the $i$th coordinate.

3. Furthermore, $\chi_S(x) \cdot \chi_T(x) = (-1)^{\sum_{i \in S} x_i + \sum_{i \in T} x_i} = (-1)^{\sum_{i \in S \triangle T} x_i} = \chi_{S \triangle T}(x)$. Sometimes we will also treat $S, T$ as their characteristic vectors in $\mathbb{Z}_2^n$, and in this notation, $\chi_S(x) \cdot \chi_T(x) = \chi_{S+T}(x)$.

4. $\langle \chi_S, \chi_T \rangle = \sum_{x \in \{0,1\}^n} \chi_S(x)\chi_T(x) = \sum_{x \in \{0,1\}^n} \chi_{S \triangle T}(x) = \begin{cases} 0 & \text{if } S \neq T \\ 2^n & \text{if } S = T. \end{cases}$

This means that $\{\chi_S\}$ are an orthogonal system, and since there are $2^n$ of them, they are a basis for $\mathbb{R}^{\{0,1\}^n}$.

Let $\hat{\mu}(S) = \langle \mu, \chi_S \rangle$: the $S$th Fourier coefficient of $\mu$.

End Aside

Proof (continued). We have $\hat{\mu}(S) = 0$ for each nonempty $S$. Thus $\mu = \alpha \chi_\emptyset$ for some $\alpha$, and since $\mu$ is a distribution, we have $\mu = \frac{1}{2^n} \chi_\emptyset$, and this is the uniform distribution.

Lemma 2. Suppose that for all nonempty $S \subseteq [n]$,
$$\Pr_{x \sim \mu}\Big[\sum_{i \in S} x_i = 1\Big] \in ((1-\epsilon)/2, (1+\epsilon)/2).$$
Then $\mu$ is $\epsilon$-close to the uniform distribution in $L_2$, and thus $\mu$ is $\epsilon 2^{n/2}$-close to the uniform distribution in $L_1$ and $\mu$ is $\epsilon$-close to the uniform distribution in $L_\infty$.

Proof. The hypothesis gives us that $|\hat{\mu}(S)| \leq \epsilon$ for all nonempty $S$. We also have $\hat{\mu}(\emptyset) = \langle \mu, \chi_\emptyset \rangle = 1$.

Aside: Parseval's Identity

Lemma 3. Take any $f : \{0,1\}^n \to \mathbb{R}$. We have
$$f = \frac{1}{2^n} \sum_S \langle f, \chi_S \rangle \chi_S = \sum_S \frac{\hat{f}(S)}{2^n} \chi_S.$$
Then
$$\sum_x f(x)^2 = \frac{1}{2^n} \sum_S \hat{f}(S)^2.$$
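Both the orthogonality of the characters (property 4) and Parseval's identity can be verified by brute force for small $n$. A minimal sketch (the test function $f$ is an arbitrary random function, chosen only for illustration):

```python
from itertools import product, combinations
import random

n = 3
cube = list(product([0, 1], repeat=n))
subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]

def chi(S, x):
    # chi_S(x) = (-1)^{sum_{i in S} x_i}
    return (-1) ** sum(x[i] for i in S)

# Property 4: <chi_S, chi_T> equals 0 for S != T and 2^n for S = T.
orthogonal = all(
    sum(chi(S, x) * chi(T, x) for x in cube) == (2 ** n if S == T else 0)
    for S in subsets for T in subsets
)

# Parseval: sum_x f(x)^2 = 2^{-n} sum_S f_hat(S)^2, with f_hat(S) = <f, chi_S>.
random.seed(7)
f = {x: random.uniform(-1, 1) for x in cube}
f_hat = {S: sum(f[x] * chi(S, x) for x in cube) for S in subsets}
lhs = sum(f[x] ** 2 for x in cube)
rhs = sum(c ** 2 for c in f_hat.values()) / 2 ** n
```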
Proof. This follows from the orthogonality of the $\chi_S$:
$$\sum_x f(x)^2 = \langle f, f \rangle = \frac{1}{4^n} \Big\langle \sum_S \hat{f}(S)\chi_S,\ \sum_T \hat{f}(T)\chi_T \Big\rangle = \frac{1}{4^n} \sum_{S,T} \hat{f}(S)\hat{f}(T) \langle \chi_S, \chi_T \rangle = \frac{1}{2^n} \sum_S \hat{f}(S)^2.$$

End Aside

Proof (cont'd). Let $U = \frac{\chi_\emptyset}{2^n}$ be the uniform distribution. We have $\hat{U}(\emptyset) = 1$, and $\hat{U}(S) = 0$ for all nonempty $S$. Thus $\mu - U$ (which is what we want to show is small in the $L_2$ norm) has the following Fourier coefficients:
$$\widehat{\mu - U}(S) = \begin{cases} 0 & \text{if } S = \emptyset \\ \hat{\mu}(S) & \text{otherwise.} \end{cases}$$
Now using the Parseval identity, we get that
$$\|\mu - U\|_2^2 = \frac{1}{2^n} \sum_{S \neq \emptyset} \hat{\mu}(S)^2 \leq \frac{1}{2^n}\, \epsilon^2 (2^n - 1) \leq \epsilon^2.$$
This completes the proof, and the results for $L_1$ and $L_\infty$ follow from Cauchy-Schwarz and trivially.

Thus if a distribution fools linear functions really well, it is almost uniform.

1.1 Notes on the $L_1$ distance

There are many distances between probability distributions, but the $L_1$ distance has special status when we are interested in pseudorandomness.

Lemma 4 (Data processing inequality). Suppose $\mu, \nu$ are distributions over the same domain $D$. Let $f$ be a function defined on $D$. Pick $x \sim \mu$, $y \sim \nu$. Then
$$\|f(x) - f(y)\|_{L_1} \leq \|\mu - \nu\|_{L_1}.$$

This is closely related to the following simple characterization of $L_1$ distance: $\frac{1}{2}\|\mu - \nu\|_{L_1} \geq \epsilon$ if and only if there exists a distinguishing test $T : D \to \{0,1\}$ such that
$$\|T(\mu) - T(\nu)\|_{L_1} = \Big|\Pr_{x \sim \mu}[T(x) = 1] - \Pr_{x \sim \nu}[T(x) = 1]\Big| \geq \epsilon.$$

Sketch of proof. Graph the distributions $\mu$ and $\nu$ over their common domain. $\frac{1}{2}\|\mu - \nu\|_{L_1}$ is the area between the curves where $\mu$ is above $\nu$. Choose $T$ to output 1 on the set where $\mu > \nu$. Then $\Pr_{x \sim \mu}[T(x) = 1] - \Pr_{x \sim \nu}[T(x) = 1]$ is the area between $\mu$ and $\nu$ on this set, which is $\frac{1}{2}\|\mu - \nu\|_{L_1} \geq \epsilon$.
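The sketch above can be illustrated with a tiny numerical example (two arbitrary distributions on a domain of size 8, chosen only for illustration): the test that outputs 1 exactly where $\mu > \nu$ achieves advantage exactly $\frac{1}{2}\|\mu - \nu\|_{L_1}$.

```python
import random

random.seed(0)
D = list(range(8))

def normalize(w):
    s = sum(w)
    return [p / s for p in w]

# Two (hypothetical) distributions on the same domain D.
mu = normalize([random.random() for _ in D])
nu = normalize([random.random() for _ in D])

l1 = sum(abs(mu[x] - nu[x]) for x in D)

# Best test T outputs 1 exactly on {x : mu(x) > nu(x)};
# its distinguishing advantage equals l1 / 2.
advantage = sum(mu[x] - nu[x] for x in D if mu[x] > nu[x])
```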
1.2 Relationship to pseudorandom generators

When we studied PRGs, the goal was to find a simple $\mu$ s.t. for all small circuits $C$,
$$\Big|\Pr_{x \sim \mu}[C(x) = 1] - \Pr_{x \sim U}[C(x) = 1]\Big| \leq \epsilon.$$
The only difference between this condition and the condition for small $L_1$ distance is the complexity constraint that $C$ is small. THIS MAKES ALL THE DIFFERENCE IN THE WORLD!

By the above discussion, we could try to show that $\mu$ is a PRG by showing the stronger condition that $\mu$ is $\epsilon$-close to the uniform distribution in $L_1$. This strengthening ruins the approach: there cannot be a $\mu$ which is generated using a small seed that is close to the uniform distribution in $L_1$ distance:
$$\|\mu - U\|_{L_1} \leq \epsilon \implies |\mathrm{support}(\mu)| \geq (1 - \epsilon) \cdot 2^n.$$
Try to show this.

2 ε-biased distributions and k-wise independence

Definition 5. $\mu$ is $\epsilon$-biased if for all nonempty $S \subseteq [n]$,
$$\Pr\Big[\sum_{i \in S} x_i = 1\Big] \in [(1-\epsilon)/2, (1+\epsilon)/2].$$

Note:
1. $\mu$ is 0-biased $\iff$ $\mu$ is uniform.
2. $\mu$ is $\epsilon$-biased $\implies$ $\mu$ is $(\epsilon, \epsilon 2^{n/2})$-close to uniform in $(L_2, L_1)$.
3. $\mu$ is $\epsilon$-biased $\implies$ $\mu$ is $\epsilon$-close to uniform in $L_\infty$.

When is $\mu$ $k$-wise independent? $\mu$ is $k$-wise independent $\iff$ for all $S$ with $1 \leq |S| \leq k$, we have $\hat{\mu}(S) = 0$.

Proof: ($\Rightarrow$) Take such an $S$. Since $\mu$ is $k$-wise independent, we have $\Pr_{x \sim \mu}[\sum_{i \in S} x_i = 1] = \frac{1}{2}$. By definition of $\hat{\mu}$, this yields $\hat{\mu}(S) = 0$.

($\Leftarrow$) Take any $S$ with $|S| = k$, and look at $\mu|_S$, the marginal of $\mu$ on the coordinates in $S$. For every nonempty $T \subseteq S$ we have $|T| \leq k$, so $\widehat{\mu|_S}(T) = \hat{\mu}(T) = 0$, meaning $\mu|_S$ is uniform. This implies that $\mu$ is $k$-wise independent.

When is $\mu$ $\delta$-almost $k$-wise independent? Suppose that for all $S \subseteq [n]$ with $1 \leq |S| \leq k$, we have $|\hat{\mu}(S)| \leq \epsilon$. Then $\mu$ is $\delta$-almost $k$-wise independent in $L_1$ for $\delta = \epsilon 2^{k/2}$. Take any $S \subseteq [n]$ with $|S| = k$. Look at $\nu = \mu|_S$, a distribution on $\{0,1\}^S$. For all nonempty $T \subseteq S$, since $\hat{\nu}(T) = \hat{\mu}(T)$, we have $|\hat{\nu}(T)| \leq \epsilon$. By Lemma 2, this implies that $\nu$ is $\epsilon 2^{k/2}$-close to uniform on $\{0,1\}^S$.
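The three notes after Definition 5 can be sanity-checked numerically: take $\mu$ uniform over a small random multiset $K$ (an arbitrary choice, just for illustration), measure its bias $\epsilon$, and compare the $L_2$, $L_1$, and $L_\infty$ distances to uniform against the bounds $\epsilon$, $\epsilon 2^{n/2}$, and $\epsilon$.

```python
import random
from itertools import product

random.seed(2)
n = 6
cube = list(product([0, 1], repeat=n))

# mu: uniform over a random multiset K of 64 points (a candidate biased space).
K = [random.choice(cube) for _ in range(64)]
mu = {x: 0.0 for x in cube}
for y in K:
    mu[y] += 1 / len(K)

def bias(mask):
    # bias of the parity test S encoded by the bitmask: |Pr[=0] - Pr[=1]|
    p1 = sum(mu[x] for x in cube
             if sum(x[i] for i in range(n) if mask >> i & 1) % 2 == 1)
    return abs(1 - 2 * p1)

# eps = maximum bias over all 2^n - 1 nonempty parity tests.
eps = max(bias(m) for m in range(1, 2 ** n))

# Distances to the uniform distribution, as in Lemma 2.
U = 1 / 2 ** n
l2 = sum((mu[x] - U) ** 2 for x in cube) ** 0.5
l1 = sum(abs(mu[x] - U) for x in cube)
linf = max(abs(mu[x] - U) for x in cube)
```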
How much randomness is needed to generate these spaces?

For $k$-wise independence: We saw that $O(k \log n)$ bits suffice. In fact, $\Omega(k \log n)$ bits are needed (you will prove this in the homework).

For $\epsilon$-biased spaces: First let us show that there exist small $\epsilon$-biased spaces; we will later see how to get them explicitly. We try to get an $\epsilon$-biased $\mu$ which is uniform on some $K \subseteq \{0,1\}^n$ with $|K|$ small. Then $\log |K|$ bits suffice to generate a sample from $\mu$.

We use the probabilistic method. Choose $K$ at random as follows: pick $y_1, \ldots, y_m \in \{0,1\}^n$ uniformly. We want that for all nonzero linear functions $h : \mathbb{Z}_2^n \to \mathbb{Z}_2$,
$$\Pr_{i \in [m]}[h(y_i) = 1] \in ((1-\epsilon)/2, (1+\epsilon)/2).$$
Fix $h$. Then:
$$\Pr_{y_1, \ldots, y_m}[y_1, \ldots, y_m \text{ are bad for } h] \leq e^{-\Omega(\epsilon^2 m)},$$
by a Chernoff bound (this is because for a random $y \in \mathbb{Z}_2^n$, $\Pr[h(y) = 1] = \frac{1}{2}$). We then union bound over all $h$ to get
$$\Pr_{y_1, \ldots, y_m}[\exists h : y_1, \ldots, y_m \text{ are bad for } h] \leq 2^n e^{-\Omega(\epsilon^2 m)}.$$
Now choose $m = O(\frac{n}{\epsilon^2})$, so that this is less than 1. We thus get an $\epsilon$-biased space which can be generated using $\log n + 2 \log \frac{1}{\epsilon} + O(1)$ bits of true randomness. It turns out that this seed length is near optimal. Later this class we will explicitly construct $\epsilon$-biased spaces with seed length $O(\log n + \log \frac{1}{\epsilon})$.

For $\delta$-almost $k$-wise independent spaces: We know that $\epsilon$-biased spaces are automatically $\delta$-almost $k$-wise independent for some $\delta$. It turns out that this already gives us almost $k$-wise independent spaces using smaller seed length than what is needed for pure $k$-wise independence. Indeed, if we take $\epsilon = 2^{-k/2}\delta$ and take an explicit $\epsilon$-biased space as mentioned above, then this is $\delta$-almost $k$-wise independent, and has a seed length of $O(\log n + \log \frac{1}{\epsilon}) = O(\log n + k + \log \frac{1}{\delta})$.

3 Efficient construction of δ-almost k-wise independent spaces

One way to get $k$-wise independence is to multiply a random seed $y$ by a matrix $M$: $y \mapsto y^T M$. In this construction, $y$ is a vector with $O(k \log n)$ coordinates chosen uniformly at random, and $M$ has $n$ columns. Each coordinate of $y^T M$ is $\langle y, a_i \rangle$, where $a_i$ is a column of $M$.
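Returning briefly to the probabilistic-method argument above: a small experiment (parameters chosen for illustration only, not matching the asymptotics) samples a random $K$ of size $m$ and measures its worst-case bias over all $2^n - 1$ nonzero linear tests.

```python
import random

random.seed(1)
n, m = 8, 2000
# Sample K = {y_1, ..., y_m} uniformly from {0,1}^n (points as n-bit ints).
K = [random.getrandbits(n) for _ in range(m)]

def bias(mask, pts):
    # Bias of the linear test h(y) = <mask, y> over the sample: |Pr[=0] - Pr[=1]|.
    ones = sum(bin(y & mask).count("1") % 2 for y in pts)
    return abs(1 - 2 * ones / len(pts))

# Worst-case bias over all nonzero linear tests h; a Chernoff + union bound
# argument says this is small with high probability for m = O(n / eps^2).
max_bias = max(bias(mask, K) for mask in range(1, 2 ** n))
```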
Claim: The output $y^T M$ will be $k$-wise independent if and only if every $k$ columns of $M$ are linearly independent.

Proof: Let $S \subseteq [n]$ with $|S| \leq k$. Suppose $\sum_{i \in S} \langle a_i, y \rangle$ is not uniformly distributed. This is equivalent to $\langle y, \sum_{i \in S} a_i \rangle$ not being uniformly distributed. But this implies that $\sum_{i \in S} a_i = 0$, since $\langle y, b \rangle$ is uniform for any fixed $b \neq 0$.
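As a concrete instance of the claim, for $k = 2$ one can take the columns of $M$ to be all nonzero vectors of $\mathbb{F}_2^\ell$: any two distinct nonzero columns are linearly independent over $\mathbb{F}_2$, so a uniform $\ell$-bit seed yields $n = 2^\ell - 1$ pairwise independent bits. A minimal check (this particular $M$ is our choice of example, not from the notes):

```python
from itertools import product, combinations

l = 3
# Columns of M: all nonzero vectors of F_2^l. Any two distinct nonzero
# columns are linearly independent, so y -> y^T M should be pairwise
# (k = 2) independent when y is uniform on {0,1}^l.
cols = [v for v in product([0, 1], repeat=l) if any(v)]

def output_bits(y):
    # The n = 2^l - 1 output bits <y, a_i>, one per column a_i.
    return [sum(yi * ai for yi, ai in zip(y, a)) % 2 for a in cols]

samples = [output_bits(y) for y in product([0, 1], repeat=l)]

# Pairwise independence: over the uniform seed, every pair of output
# coordinates takes each of the 4 patterns equally often.
pairwise_ok = all(
    [(b[i], b[j]) for b in samples].count(pat) == len(samples) // 4
    for i, j in combinations(range(len(cols)), 2)
    for pat in product([0, 1], repeat=2)
)
```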
Claim: If, instead of taking $y$ uniformly at random, we take $y$ from an $\epsilon$-biased distribution, then $y^T M$ will still be $\epsilon 2^{k/2}$-almost $k$-wise independent.

Proof: Suppose not; then there exists $S \subseteq [n]$ with $1 \leq |S| \leq k$ such that $\mathrm{bias}(\langle \sum_{i \in S} a_i, y \rangle) > \epsilon$. Since every set of $k$ columns of $M$ is linearly independent, we know that $\sum_{i \in S} a_i \neq 0$. In addition, $y$ is chosen from an $\epsilon$-biased space, so the bias of $\langle b, y \rangle$ is at most $\epsilon$ for every fixed $b \neq 0$. So we've reached a contradiction.

How many bits of randomness will we need to generate an $n$-bit sample from a $\delta$-almost $k$-wise independent space using this procedure? We need a $\delta 2^{-k/2}$-biased sample of length $O(k \log n)$. Since $\log m + 2 \log \frac{1}{\epsilon}$ bits of uniform randomness are needed for $m$ bits of $\epsilon$-biased randomness, we need
$$O(\log k + \log \log n) + O(\log \tfrac{1}{\delta} + k) = O(k + \log \log n + \log \tfrac{1}{\delta})$$
total bits of randomness.

4 Applications of δ-almost k-wise independent distributions

4.1 k-universal sets

A $k$-universal set $S \subseteq \{0,1\}^n$ has the property that the projection of $S$ onto any $k$ indices contains all $2^k$ possible patterns. We can use $\delta$-almost $k$-wise independent distributions to construct $k$-universal sets. If $\delta < 2^{-k}$, then any $\delta$-almost $k$-wise independent distribution has $k$-universal support. The size of the $k$-universal set we get out of this is $2^{O(k + \log \log n + \log \frac{1}{\delta})} = (2^k \log n)^{O(1)}$, and is nearly optimal. (Note the surprisingly tiny dependence on $n$!)

4.2 Ramsey graphs

Pick the edges $(x_{ij})_{i < j} \in \{0,1\}^{\binom{n}{2}}$ from a $\delta$-almost $k$-wise independent space; we can interpret $x_{ij} = 1$ as an edge between vertex $i$ and vertex $j$, and $x_{ij} = 0$ as the absence of an edge. Fix $S \subseteq [n]$, $|S| = k$. By the data processing inequality,
$$\Pr[x_{ij} \text{ are all 0 or all 1 for all } i, j \in S] \leq 2 \cdot 2^{-\binom{k}{2}} + \delta.$$
Taking a union bound,
$$\Pr[\exists S, |S| = k, \text{ such that } S \text{ is a clique or independent set}] \leq \binom{n}{k} \Big( 2 \cdot 2^{-\binom{k}{2}} + \delta \Big).$$
By setting $\delta = 2^{-\binom{k}{2}}$ and $n = 2^{k/10}$ in the above inequality, we ensure that the probability that there is a clique or independent set of size $k$ is less than 1.
Thus, we've described an explicit family of $2^{O(\log^2 n)}$ graphs on $n$ vertices, at least one of which is $O(\log n)$-Ramsey. As a corollary, we can construct an $O(\log n)$-Ramsey graph in $2^{O(\log^2 n)}$ time.
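The parameter setting above can be verified with exact rational arithmetic ($k = 100$ as an illustrative value, so $n = 2^{10}$):

```python
from fractions import Fraction
from math import comb

# Parameters from the union bound: delta = 2^{-C(k,2)} and n = 2^{k/10}.
k = 100
n = 2 ** (k // 10)
delta = Fraction(1, 2 ** comb(k, 2))

# Pr[some k-set is a clique or independent set]
#   <= C(n, k) * (2 * 2^{-C(k,2)} + delta),
# computed exactly as a rational number.
bound = comb(n, k) * (2 * Fraction(1, 2 ** comb(k, 2)) + delta)
```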
5 Construction of ε-biased spaces

5.1 Finite extension field review

The construction described here will use the finite field $\mathbb{F}_{2^n}$. This is an $n$-dimensional vector space over $\mathbb{F}_2$. Addition is the same as for $\mathbb{F}_2^n$; multiplication is a bilinear map. A polynomial $P(x) \in \mathbb{F}_{2^n}[x]$ of degree $d$ has at most $d$ roots.

5.2 Properties needed from an ε-biased space

Suppose $y_1, \ldots, y_m$ is an $\epsilon$-biased space; let $G$ be the $n \times m$ matrix with columns $y_1, \ldots, y_m$. Pick $x \in \{0,1\}^n$ and consider $x^T G$. If $x \neq 0$, it must have a $((1-\epsilon)/2, (1+\epsilon)/2)$ fraction of 1s. Pick any $x, y \in \{0,1\}^n$ with $x \neq y$, and consider the Hamming distance $\Delta(x^T G, y^T G)$; this is the number of 1s in $x^T G - y^T G = (x - y)^T G$, which is in the range $((1-\epsilon)/2, (1+\epsilon)/2) \cdot m$. Thus, the image $\{x^T G\}$ is a linear space in $\{0,1\}^m$ of dimension $n$, such that any two distinct vectors in the space have fractional distance $\frac{1}{2} \pm \frac{\epsilon}{2}$.

5.3 Construction and proof of correctness

A point from the $\epsilon$-biased space is calculated from two uniformly chosen elements $\alpha, \beta \in \mathbb{F}_{2^n}$. The point is calculated as
$$(\alpha, \beta) \mapsto [\langle 1, \beta \rangle, \langle \alpha, \beta \rangle, \langle \alpha^2, \beta \rangle, \ldots, \langle \alpha^{N-1}, \beta \rangle],$$
where $\langle \cdot, \cdot \rangle$ is the inner product over $\mathbb{F}_2^n$, when elements of $\mathbb{F}_{2^n}$ are represented in some basis over $\mathbb{F}_2$.

If $N$ is set to $\epsilon 2^n$, then this construction needs $2n = 2\log(N/\epsilon) = 2\log N + 2\log\frac{1}{\epsilon}$ bits of randomness.

We need to show that the sample space described is $\epsilon$-biased. Pick $(\alpha, \beta)$ uniformly from $\mathbb{F}_{2^n}^2$, and take any nonempty $S \subseteq \{0, 1, \ldots, N-1\}$. We need to show that
$$\Pr_{\alpha, \beta}\Big[\sum_{i \in S} \langle \alpha^i, \beta \rangle = 0\Big] \in ((1-\epsilon)/2, (1+\epsilon)/2).$$
We have $\sum_{i \in S} \langle \alpha^i, \beta \rangle = \langle \sum_{i \in S} \alpha^i, \beta \rangle$. There are two cases to consider: either $\alpha$ is a root of $P(x) = \sum_{i \in S} x^i$, or it is not. If $\alpha$ is not a root of $P(x)$, then $\Pr_\beta[\langle \sum_{i \in S} \alpha^i, \beta \rangle = 0] = \frac{1}{2}$. If $\alpha$ is a root of $P(x)$, then certainly $\sum_{i \in S} \langle \alpha^i, \beta \rangle = 0$. However, there are at most $N$ roots of $P(x)$, so the probability that $\alpha$ is a root is at most $\frac{N}{2^n} = \epsilon$. Thus, the space is $\epsilon$-biased.
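A minimal sketch of the construction in $\mathbb{F}_{2^8}$ (so $n = 8$; realizing the field as $\mathbb{F}_2[x]/(x^8+x^4+x^3+x+1)$ and taking the inner product in the standard basis, i.e., the parity of the AND of bit representations, are our illustrative choices). The proof's key step, that the bias of a test $S$ equals $\Pr_\alpha[P(\alpha) = 0] \leq N/2^n$, can be checked exhaustively for one test:

```python
def gf_mul(a, b, poly=0x11B, n=8):
    # Carry-less multiplication in F_{2^8} = F_2[x] / (x^8 + x^4 + x^3 + x + 1).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= poly
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def parity(x):
    return bin(x).count("1") % 2

n, N = 8, 8

def sample(alpha, beta):
    # The N-bit output [<alpha^i, beta>]_{i=0..N-1} from the 2n-bit seed.
    return [parity(gf_pow(alpha, i) & beta) for i in range(N)]

# For the test S, the XOR of the output coordinates in S is <P(alpha), beta>
# with P(x) = sum_{i in S} x^i, so the test is biased only when alpha is a
# root of P; hence bias = Pr[P(alpha) = 0] <= deg(P)/2^n <= N/2^n.
S = [0, 3, 5]
roots = sum(1 for a in range(2 ** n)
            if gf_pow(a, 0) ^ gf_pow(a, 3) ^ gf_pow(a, 5) == 0)
bias = roots / 2 ** n
```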
More informationNotes for Lecture 11
Stanford University CS254: Computational Complexity Notes 11 Luca Trevisan 2/11/2014 Notes for Lecture 11 Circuit Lower Bounds for Parity Using Polynomials In this lecture we prove a lower bound on the
More informationLecture 21: P vs BPP 2
Advanced Complexity Theory Spring 206 Prof. Dana Moshkovitz Lecture 2: P vs BPP 2 Overview In the previous lecture, we began our discussion of pseudorandomness. We presented the Blum- Micali definition
More information6.842 Randomness and Computation April 2, Lecture 14
6.84 Randomness and Computation April, 0 Lecture 4 Lecturer: Ronitt Rubinfeld Scribe: Aaron Sidford Review In the last class we saw an algorithm to learn a function where very little of the Fourier coeffecient
More informationCOMS E F15. Lecture 22: Linearity Testing Sparse Fourier Transform
COMS E6998-9 F15 Lecture 22: Linearity Testing Sparse Fourier Transform 1 Administrivia, Plan Thu: no class. Happy Thanksgiving! Tue, Dec 1 st : Sergei Vassilvitskii (Google Research) on MapReduce model
More informationLecture 12: Constant Degree Lossless Expanders
Lecture : Constant Degree Lossless Expanders Topics in Complexity Theory and Pseudorandomness (Spring 03) Rutgers University Swastik Kopparty Scribes: Meng Li, Yun Kuen Cheung Overview In this lecture,
More informationCSE 190, Great ideas in algorithms: Pairwise independent hash functions
CSE 190, Great ideas in algorithms: Pairwise independent hash functions 1 Hash functions The goal of hash functions is to map elements from a large domain to a small one. Typically, to obtain the required
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More informationImpagliazzo s Hardcore Lemma
Average Case Complexity February 8, 011 Impagliazzo s Hardcore Lemma ofessor: Valentine Kabanets Scribe: Hoda Akbari 1 Average-Case Hard Boolean Functions w.r.t. Circuits In this lecture, we are going
More informationRecall that any inner product space V has an associated norm defined by
Hilbert Spaces Recall that any inner product space V has an associated norm defined by v = v v. Thus an inner product space can be viewed as a special kind of normed vector space. In particular every inner
More informationLinear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products
Linear Algebra Paul Yiu Department of Mathematics Florida Atlantic University Fall 2011 6A: Inner products In this chapter, the field F = R or C. We regard F equipped with a conjugation χ : F F. If F =
More informationLecture 2: Continued fractions, rational approximations
Lecture 2: Continued fractions, rational approximations Algorithmic Number Theory (Fall 204) Rutgers University Swastik Kopparty Scribe: Cole Franks Continued Fractions We begin by calculating the continued
More informationAffine extractors over large fields with exponential error
Affine extractors over large fields with exponential error Jean Bourgain Zeev Dvir Ethan Leeman Abstract We describe a construction of explicit affine extractors over large finite fields with exponentially
More informationQuestions Pool. Amnon Ta-Shma and Dean Doron. January 2, Make sure you know how to solve. Do not submit.
Questions Pool Amnon Ta-Shma and Dean Doron January 2, 2017 General guidelines The questions fall into several categories: (Know). (Mandatory). (Bonus). Make sure you know how to solve. Do not submit.
More informationLow-discrepancy sets for high-dimensional rectangles: a survey
The Computational Complexity Column Eric Allender Rutgers University, Department of Computer Science Piscataway, NJ 08855 USA allender@cs.rutgers.edu With this issue of the Bulletin, my tenure as editor
More informationHigher-order Fourier analysis of F n p and the complexity of systems of linear forms
Higher-order Fourier analysis of F n p and the complexity of systems of linear forms Hamed Hatami School of Computer Science, McGill University, Montréal, Canada hatami@cs.mcgill.ca Shachar Lovett School
More informationThe Probabilistic Method
The Probabilistic Method Janabel Xia and Tejas Gopalakrishna MIT PRIMES Reading Group, mentors Gwen McKinley and Jake Wellens December 7th, 2018 Janabel Xia and Tejas Gopalakrishna Probabilistic Method
More informationMath 61CM - Solutions to homework 6
Math 61CM - Solutions to homework 6 Cédric De Groote November 5 th, 2018 Problem 1: (i) Give an example of a metric space X such that not all Cauchy sequences in X are convergent. (ii) Let X be a metric
More informationLecture Notes on Metric Spaces
Lecture Notes on Metric Spaces Math 117: Summer 2007 John Douglas Moore Our goal of these notes is to explain a few facts regarding metric spaces not included in the first few chapters of the text [1],
More information