Pseudorandomness for permutation and regular branching programs


Anindya De
Computer Science Division, University of California, Berkeley, Berkeley, CA 94720, USA

Abstract—In this paper, we prove the following results about the INW pseudorandom generator:

- It fools constant-width permutation branching programs with error ε using a seed of length O(log n · log(1/ε)).
- It fools constant-width regular branching programs with error ε using a seed of length O(log n · (log log n + log(1/ε))).

These results match the recent results of Koucký et al. (STOC 2011), Braverman et al., and Brody and Verbin (FOCS 2010). However, our analysis gives a better dependence of the seed length on the width for permutation branching programs than the results of Koucký et al. (STOC 2011). Perhaps more significantly, our proof method is entirely different and linear-algebraic in nature, as opposed to the group-theoretic methods of [1] and the information-theoretic and probabilistic methods of [2], [3]. Along the way, we also obtain pseudorandom generators for small-biased spaces for group products [4] with a seed length of O(log n · (log |G| + log(1/ε))). Previously, a seed length of O(log n · (|G|^O(1) + log(1/ε))) was achievable using the pseudorandom generator of [1].

Keywords—branching programs; INW generator; expander products

I. INTRODUCTION

One of the most fundamental questions in complexity theory is whether one can save on computational resources like space and time by using randomness. While it is known that randomness is indispensable in settings like cryptography and distributed computation, a long line of research [5], [6], [7], [8] has shown that, assuming appropriate lower bounds on the circuit complexity of certain functions, one can derandomize every randomized polynomial-time algorithm, i.e., show P = BPP.
Since it is hard to unconditionally achieve non-trivial derandomization of complexity classes like BPP, attention was directed towards derandomization of low-level complexity classes, where it may be possible to achieve unconditional results. In fact, it has also been shown that any non-trivial derandomization of BPP implies circuit lower bounds [9], which seem out of reach of the present state of the art. One of the most important problems in this line of research is to derandomize bounded-space computation. The ultimate aim of this line of research is to prove RL = L, i.e., to show that any computation that can be solved in randomized logspace can be simulated in deterministic logspace. Savitch [10] showed that RL ⊆ NL ⊆ L², i.e., randomized logspace computation, and in fact non-deterministic logspace computation, can be simulated deterministically in O(log² n) space. Nisan [11] also showed that RL ⊆ L² by constructing a pseudorandom generator (PRG) which can stretch a seed of length O(log² n) into n bits that fool logspace machines. In fact, Nisan's PRG fools read-once branching programs (which we define next) of polynomial length and width.

Definition 1.1: A read-once branching program (BP) of width w and length n is a directed multi-layer graph with n+1 layers such that each layer has w nodes, with edges going from the i-th layer to the (i+1)-th layer (0 ≤ i ≤ n−1). For every node (except those in the last layer), there are exactly two edges leaving that node, one marked 0 and the other marked 1. There is a designated start state in the first layer and a set of accepting states in the (n+1)-th layer.

Remark 1.2: In this paper, whenever we refer to branching programs, we mean read-once branching programs. We note that if the read-once restriction is not imposed, then width-5 branching programs capture NC¹ [12].

A BP is said to be a permutation branching program (PBP) if, for every layer, the transitions corresponding to 0 (resp. 1) form a matching.
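As a concrete illustration of Definition 1.1 (the encoding below is hypothetical, chosen for the sketch, and not taken from the paper), a branching program can be evaluated by following one transition per layer. A width-2 program whose 1-transitions swap the two states computes the parity of its input; since both transitions are matchings, it is a permutation branching program.

```python
def eval_bp(transitions, start, accept, x):
    """Evaluate a read-once branching program on input x.

    transitions[i][b] is a list of length w mapping each state in
    layer i to its successor in layer i+1 on input bit b.
    """
    state = start
    for i, bit in enumerate(x):
        state = transitions[i][bit][state]
    return state in accept

# Width-2 program of length 3 computing parity: on input bit 1 the two
# states are swapped (a matching in every layer, hence a PBP).
n = 3
ident, swap = [0, 1], [1, 0]
transitions = [{0: ident, 1: swap} for _ in range(n)]
assert eval_bp(transitions, 0, {1}, [1, 0, 0]) is True   # odd parity
assert eval_bp(transitions, 0, {1}, [1, 1, 0]) is False  # even parity
```

The same convention (one successor array per layer and per bit) extends directly to regular branching programs; only the in-degree condition changes.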
A BP is said to be a regular branching program (RBP) if the number of edges coming into every node is either 0 or 2. A given BP accepts an input x ∈ {0,1}ⁿ if, starting from the start state and following the path specified by the input, it ends in one of the accepting states. It is not hard to see that randomized logspace computation is a uniform version of BPs with w = n^O(1).

Coming back to PRGs for branching programs, after [11], several papers [13], [14] improved on some parameters of the construction in [11]. However, improving on the O(log² n) seed remained (and continues to remain) open even in the following minimal sense: it is not known how to construct a PRG with seed length o(log² n) and constant error for width-3 BPs. Faced with this difficulty, research focussed on solving special cases of this problem with better seed lengths ([15], [16], [17]). Attention has also been directed towards

getting better seed lengths when there is some structural restriction on the BPs. In particular, when the branching program is regular, Braverman et al. [2] and Brody and Verbin [3] constructed pseudorandom generators with seed length O(log n · (log log n + log(1/ε))) which fool constant-width branching programs with error ε. The dependence of the seed length on the width was better for the latter paper, which obtained O(log n · (log w + log log n + log(1/ε))). Koucký, Nimbhorkar and Pudlák [1] improved on this further for the case of permutation branching programs. In particular, for fooling constant-width permutation branching programs with error ε, they obtained a seed length of O(log n · log(1/ε)). In their analysis, they transform this problem into the language of group products¹ and then analyze their construction using basic properties of groups. In this vein, we should also mention that Šíma and Žák recently obtained a breakthrough by constructing a hitting set with seed length O(log n) for width-3 branching programs [18]. However, their techniques seem totally disjoint from other works in this line of research.

A. Our results

We present a pseudorandom generator which ε-fools permutation branching programs of length n and width w using a seed of length O(log n · (w⁸ + log(1/ε))). For regular branching programs, we get a seed length of O(log n · (w⁸ + log log n + log(1/ε))). The PRG we use is the INW generator [13] (as was the case in the previous works [3], [2], [1]). We remark that Koucký et al. obtained a seed length of O(log n · (w! + log(1/ε))) for fooling permutation branching programs using the same generator. What we consider more significant is that our analysis is based on analyzing the spectra of the stochastic matrices that arise in the transitions of the branching program. Thus, it is a more linear-algebraic approach which might be helpful in other contexts as well. This is in contrast to the result of Koucký et al.,
which is based on a group-theoretic approach and is thus difficult to adapt to more combinatorial settings. We remark here that ours is not the first result to use a linear-algebraic analysis of the INW generator. Previously, Rozenman and Vadhan [19] used a linear-algebraic analysis of the INW generator to prove SL = L (following Reingold's seminal result [20]).

We also consider the small-bias spaces problem for group products, first considered by Meka and Zuckerman in [4]. Let G be a group. The task is to construct a distribution D over Gⁿ such that the following holds. Let x₁, ..., x_n ∈ {0,1} and let U_{Gⁿ} be the uniform distribution over Gⁿ. Let g = (g₁, ..., g_n) be sampled from D. Then, for any g* ∈ G,

|Pr_{g←D}[g₁^{x₁} ⋯ g_n^{x_n} = g*] − Pr_{g←U_{Gⁿ}}[g₁^{x₁} ⋯ g_n^{x_n} = g*]| ≤ ε.

The previous best result that could be obtained using the works of [4] and [1] is O(log n · (|G|^O(1) + log(1/ε))). We show how to get O(log n · (log |G| + log(1/ε))) with a much simpler analysis.

¹ We discuss this later.

Because of lack of space, the proof for both arbitrary regular and permutation branching programs is deferred to the appendix. Instead, we consider the much simpler case of fooling a particular kind of permutation branching program, namely abelian group products (this is different from the problem in [4]). While the problem is much simpler, it exhibits some of the ideas that will be used in the analysis of the more general case of permutation and regular branching programs. Secondly, the complexity of the analysis in [1] seemed to stem from the fact that the group underlying the permutation branching program has proper subgroups. We show that, rather, it is the non-commutativity of the group that makes the analysis complicated. We next analyze the INW generator in detail and explain the main idea behind the improved analysis of the generator.

II. TECHNICAL OVERVIEW

A.
Impagliazzo-Nisan-Wigderson generator

The PRG used in this paper is the construction of Impagliazzo, Nisan and Wigderson [13] (hereafter referred to as the INW generator). We now describe their construction. First, let us recall the following important fact about the construction of expander graphs [21].

Fact 2.1: For any n and λ > 0, there exist graphs on {0,1}ⁿ with degree d = (1/λ)^Θ(1) and second eigenvalue λ such that, given any vertex x ∈ {0,1}ⁿ and an edge label i ∈ [d], the i-th neighbor of x is computable in n^Θ(1) time.

The INW generator is defined recursively as follows. Let Γ₀ : {0,1}^t → {0,1} be simply the function which maps a t-bit string to its first bit. Assume Γ_{i−1} : {0,1}^m → {0,1}^l. Then Γ_i : {0,1}^{m+log d} → {0,1}^{2l} is defined as follows. Let x = y ∘ z ∈ {0,1}^{m+log d}, where y is m bits long and z is log d bits long. Let H be a graph on 2^m vertices constructed using Fact 2.1, and let y′ be the z-th neighbor of y in H. Then Γ_i(x) = Γ_{i−1}(y) ∘ Γ_{i−1}(y′). Here and elsewhere, ∘ is used to denote concatenation.

From the above, one can easily see that Γ_i : {0,1}^{t+i·log d} → {0,1}^{2^i}. As we can take t to be as small as we like, we get that Γ_{log n} : {0,1}^{log n·log d} → {0,1}ⁿ. As d = (1/λ)^Θ(1), the INW generator stretches a seed of length O(log n · log(1/λ)) to n bits.

Remark 2.2: In order to tackle the problem of small-biased spaces for group products, it will be necessary for us to define an INW generator which produces elements from a bigger alphabet. The construction in this case is as follows. We assume that we want to produce elements from some set G. For some t ≥ log |G|, let Γ₀ : {0,1}^t → G be simply the function which takes the first ⌈log |G|⌉ bits of its input and interprets them as an element of G. Assume Γ_{i−1} : {0,1}^m → G^l. Then Γ_i : {0,1}^{m+log d} → G^{2l} is defined as follows. Let x = y ∘ z ∈ {0,1}^{m+log d}, where y is m bits long and z is

log d bits long. Let H be a graph on 2^m vertices constructed using Fact 2.1, and let y′ be the z-th neighbor of y in H. Then Γ_i(x) = Γ_{i−1}(y) ∘ Γ_{i−1}(y′). From the above, one can easily see that Γ_i : {0,1}^{t+i·log d} → G^{2^i}. As we can take t to be anything as long as it is at least log |G|, we get that Γ_{log n} : {0,1}^{log |G| + log n·log d} → Gⁿ. As d = (1/λ)^Θ(1), the INW generator stretches a seed of length O(log |G| + log n · log(1/λ)) into a sequence of n group elements.

We now get back to the analysis of the INW generator. For the purposes of this discussion, we assume that the INW generator produces bit strings as opposed to elements of G.

B. Analysis of the INW generator in terms of stochastic matrices

To understand the analysis of the INW generator from [13], as well as the improvements in this paper, it is helpful to look at branching programs from the following viewpoint. Assume that the branching program is of width w. Then the states in every layer can be numbered from 1 to w. Also, for x, y ∈ [w] and b ∈ {0,1}, we write (x, i, b) → (y, i+1) if there is an edge labelled b going from vertex x in layer i to vertex y in layer i+1. Now, for every layer 0 ≤ i < n, we can introduce two stochastic matrices M_{0,i} and M_{1,i} (we interchangeably call them walk matrices as well), defined as

M_{b,i}(x, y) = 1 if (x, i, b) → (y, i+1), and 0 otherwise.

Now, if we start with a probability distribution x ∈ R^w over the states in the 0-th layer and the string chosen is y, then the probability distribution on the states in the final layer is given by x^T · ∏_{i=0}^{n−1} M_{y_i, i}.
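The walk-matrix viewpoint just described can be sketched numerically. Below is a minimal illustration (the program is a hypothetical width-2 parity example, not one from the paper): each layer has M_{0,i} equal to the identity and M_{1,i} equal to the swap matrix, and the distribution over the final layer is obtained by repeatedly multiplying the start distribution by the walk matrix selected by each input bit.

```python
# Width-2 parity program: M_{0,i} = I and M_{1,i} = S (the swap matrix)
# in every layer, so following input y ends at state parity(y).
I = [[1.0, 0.0], [0.0, 1.0]]
S = [[0.0, 1.0], [1.0, 0.0]]

def final_distribution(x0, y):
    """x0: distribution over layer-0 states; y: the input bit string.

    Returns x0^T * prod_i M_{y_i, i} as a length-2 list.
    """
    dist = x0
    for bit in y:
        M = S if bit else I
        dist = [sum(dist[i] * M[i][j] for i in range(2)) for j in range(2)]
    return dist

# Starting at state 0, the walk ends at the state equal to parity(y).
assert final_distribution([1.0, 0.0], [1, 0, 1]) == [1.0, 0.0]
assert final_distribution([1.0, 0.0], [1, 1, 1]) == [0.0, 1.0]
```

Averaging this quantity over the choice of y, as done next, replaces the per-layer matrices by (M_{0,i} + M_{1,i})/2 when the bits are uniform and independent.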
Since each string y ∈ {0,1}ⁿ is chosen with probability 1/2ⁿ, the final distribution is given by

x^T · Σ_{y∈{0,1}ⁿ} (1/2ⁿ) ∏_{i=0}^{n−1} M_{y_i,i} = x^T · ∏_{i=0}^{n−1} ((M_{0,i} + M_{1,i})/2).

If instead the y's are drawn from a distribution D, then the distribution on the final layer will be

x^T · Σ_{y∈{0,1}ⁿ} D(y) ∏_{i=0}^{n−1} M_{y_i,i}.

Thus, our aim is to find a distribution D which can be sampled with few bits of randomness and satisfies

‖ Σ_{y∈{0,1}ⁿ} D(y) ∏_{i=0}^{n−1} M_{y_i,i} − ∏_{i=0}^{n−1} ((M_{0,i} + M_{1,i})/2) ‖ ≤ ε.

We are not specific about the norm in the above statement, as any reasonable norm will do. In particular, we will be using both the operator norm and the Frobenius norm for matrices. Indeed, note that for a constant-sized matrix, these are within constant factors of each other. We also observe that the product of stochastic matrices is a stochastic matrix. We now define and study the concept of the expander product of distributions of matrices.

C. Comparison of the product and the expander product of matrices

For this subsection and the next, we assume that we are only dealing with permutation branching programs. The analogous description for regular branching programs is slightly more complicated and is deferred to the full version. We recall that the operator norm of a permutation matrix M (denoted by ‖M‖₂) is always bounded by 1. An important property of the operator norm is that it is submultiplicative, i.e., ‖X·Y‖₂ ≤ ‖X‖₂·‖Y‖₂.

Let us assume that Γ₁, Γ₂ : {0,1}^r → {0,1}ⁿ and ρ₁, ρ₂ : {0,1}ⁿ → C^{m×m}. Assume that ‖ρ₁(x)‖₂, ‖ρ₂(x)‖₂ ≤ 1 for all x ∈ {0,1}ⁿ (this will be the case for permutation branching programs). Consider the following two sums:

A = (1/2^r) Σ_{x∈{0,1}^r} ρ₁(Γ₁(x)),    B = (1/2^r) Σ_{x∈{0,1}^r} ρ₂(Γ₂(x)).    (1)

Then the product of A and B is given by

A · B = (1/2^{2r}) Σ_{x,y∈{0,1}^r} ρ₁(Γ₁(x)) ρ₂(Γ₂(y)).

We now consider a 2^d-regular graph H on {0,1}^r with second eigenvalue bounded by λ.
We define the expander product as

A ⊛_H B = (1/2^{r+d}) Σ_{x∈{0,1}^r, (x,y)∈E(H)} ρ₁(Γ₁(x)) ρ₂(Γ₂(y)).

We note that without specifying the functions ρ_i, Γ_i, it is not possible to concretely define A ⊛_H B. However, specifying all the parameters makes the definitions and applications cumbersome. So we sacrifice some accuracy for the sake of clarity; the ρ_i's and Γ_i's will be clear from the context.

The relation between the above definitions and the INW generator and branching programs is as follows. Let us consider a branching program of length 2^{m+1}. Let Γ_m : {0,1}^t → {0,1}^{2^m} be the instantiation of the INW generator which stretches t bits to 2^m bits. Let us define ρ₁(x) = ∏_{i<2^m} M_{x_i, i} and ρ₂(x) = ∏_{i<2^m} M_{x_i, i+2^m}. Hence, A and B correspond to the walk matrices for the first and the second half of the branching program, provided the input to both halves is sampled from the output of Γ_m. Now, if we use independent seeds for the first and the second applications of Γ_m, then the transition matrix for the entire branching program is A · B. On the other hand, we can have another application of the INW generator and hence define Γ_{m+1} : {0,1}^{t+d} → {0,1}^{2^{m+1}} as

Γ_{m+1}(x, j) = Γ_m(x) ∘ Γ_m(H(x, j)),

where

H(x, j) denotes the j-th neighbor of x in H. In this case, it is easy to see that the transition matrix corresponding to the entire branching program, when the input is sampled from the output of Γ_{m+1}, is A ⊛_H B.

Lemma 2.3: Let ρ₁, ρ₂ : {0,1}^m → C^{w×w} be such that for all x ∈ {0,1}^m and j ∈ {1,2}, ‖ρ_j(x)‖₂ ≤ 1. Let Γ₁, Γ₂ : {0,1}^r → {0,1}^m, and let H be a 2^d-regular graph on {0,1}^r with second eigenvalue λ. Then, for A and B as defined in (1),

‖A · B − A ⊛_H B‖₂ ≤ λ.

We also have the following two observations:

- If either of ρ₁ or ρ₂ is a constant function, then A · B − A ⊛_H B = 0.
- If ρ₁ and ρ₂ are the identity on some subspace W, then A · B as well as A ⊛_H B are the identity on W.

Proof: Let us define X, Y ∈ C^{w × w·2^r} as follows. Both X and Y are divided into 2^r blocks such that the i-th block of X is ρ₁(Γ₁(i)) and the i-th block of Y is ρ₂(Γ₂(i)). Also, let us define matrices Λ₁, Λ₂ ∈ C^{w·2^r × w·2^r} as Λ₁ = Λ_H ⊗ Id and Λ₂ = Λ_K ⊗ Id, where Id is the w×w identity matrix, Λ_H is the random-walk matrix of the graph H, and Λ_K is the random-walk matrix of the clique (with self-loops) on 2^r vertices. Then,

A ⊛_H B − A · B = (X (Λ₁ − Λ₂) Y^T)/2^r.

However, we also note that, by the definition of the second eigenvalue, Λ_H − Λ_K = C with ‖C‖₂ ≤ λ. We also recall that ‖W₁ ⊗ W₂‖₂ ≤ ‖W₁‖₂ · ‖W₂‖₂ and that the operator norm is submultiplicative. Now, observe that

(X (Λ₁ − Λ₂) Y^T)/2^r = (X/√(2^r)) (C ⊗ Id) (Y^T/√(2^r)).

Also, as each block of X is a matrix whose norm is at most 1, we have ‖X/√(2^r)‖₂ ≤ 1. Similarly, ‖Y^T/√(2^r)‖₂ ≤ 1. Putting everything together, we prove the first part of the claim. The last two observations follow from the fact that the graph H is regular.

D. Basic error analysis of the INW generator

We would now like to put an upper bound on the probability with which the branching program can distinguish between the output of the INW generator and the uniform distribution. Equivalently, let M be the average walk matrix between the 0-th and the n-th layer when the input is uniformly random.
Let M′ be the average walk matrix when the input is chosen from the output of the INW generator. We would like to put an upper bound on ‖M − M′‖.

In order to do this, we will consider two trees, both full binary trees, whose leaf nodes are numbered 1 to n from left to right. The first tree represents the average walk matrices (at various points in the branching program) when the input is sampled from the output of the INW generator. The second tree represents the average walk matrices when the input is uniform. We call the first tree the pseudo tree and the second one the true tree. Without loss of generality, we take n to be a power of 2.

Consider any node x (in either of the trees) at height m. Assume that the leaves in the subtree rooted at x are numbered {i, ..., j = i + 2^m − 1}. Also, let Γ_m : {0,1}^t → {0,1}^{2^m} be the INW generator which stretches t bits into 2^m bits, and let Γ′_m : {0,1}^{2^m} → {0,1}^{2^m} be the identity function². Further, we define ρ_x(w) = ∏_{i ≤ l ≤ j} M_{w_l, l} (indexing w by the positions i through j). The labeling L(x) is then as follows:

L(x) = (1/2^{2^m}) Σ_{w∈{0,1}^{2^m}} ρ_x(Γ′_m(w)), if x is in the true tree;
L(x) = (1/2^t) Σ_{w∈{0,1}^t} ρ_x(Γ_m(w)), if x is in the pseudo tree.

Thus, with the above labeling, L(x) is simply the average walk matrix to go from layer i to layer i + 2^m when the input is sampled from the uniform distribution (in the case of the true tree) or from the output of the INW generator (in the case of the pseudo tree). We adopt the following convention: whenever we talk about a node x in the tree, it refers to the corresponding nodes in both the true and the pseudo tree. To refer to the corresponding node in the true tree, we write x_t, and for the pseudo tree, we write x_p. We now observe the following: let x be a node and let y and z be its left and right children. Then

L(x_t) = L(y_t) · L(z_t),    L(x_p) = L(y_p) ⊛_H L(z_p).

Claim 2.4: Let x be a node at height t. Then ‖L(x_t) − L(x_p)‖₂ ≤ (2^t − 1) · λ.

Proof: Clearly, the claim holds when t = 0.
We assume it holds for t ≤ t₀ and prove it for t = t₀ + 1. Let x be at height t₀ + 1 and let its children y and z be at height t₀. Then

‖L(x_p) − L(x_t)‖₂ = ‖L(x_p) − L(y_p)L(z_p) + L(y_p)L(z_p) − L(y_t)L(z_t)‖₂
≤ ‖L(x_p) − L(y_p)L(z_p)‖₂ + ‖L(y_p)L(z_p) − L(y_t)L(z_t)‖₂
≤ λ + ‖L(y_p)‖₂ · ‖L(z_p) − L(z_t)‖₂ + ‖L(z_t)‖₂ · ‖L(y_p) − L(y_t)‖₂
≤ λ + 2 · (2^{t₀} − 1) · λ = (2^{t₀+1} − 1) · λ.

The above claim clearly shows that to have a total error of ε, it suffices to take λ = ε/n, which means that the INW generator has a seed of length O(log n · log(n/ε)).

The more important aspect of the above analysis is that, while we pessimistically assume that ‖L(y_p)‖₂ and ‖L(z_t)‖₂ are as large as 1, in general they can be much smaller. In particular, if they are both bounded by, say, 1/3, then the error will not increase with the height. Of course, in general this is not true, and it can be the case that ‖L(y_p)‖₂ = ‖L(z_t)‖₂ = 1; but

² For a bit string w of length n and a position t ∈ [n], w_t denotes the t-th bit of w.

then we show that, in fact, there is no error incurred when one takes the expander product versus the true product!³

It is interesting to note that the results in [2], [3], [1], as well as our result, beat the above naive analysis for regular (or permutation) branching programs by a cleverer analysis of the INW generator. In contrast, the results in [15], [16], [17], which are directed towards symmetric functions, use a combination of hash functions and the INW generator. It seems hard to use hash functions for general branching programs, because the main purpose of hashing in these constructions is to rearrange weights so that they are evenly spread out. However, unless one is guaranteed that the function being computed by the branching program is invariant under permutations, it seems impossible to use hash functions.

We now consider a very simple case of a permutation branching program (and indeed, a regular branching program), namely a permutation branching program with an underlying abelian group.

III. FOOLING ABELIAN GROUP PRODUCTS USING THE INW GENERATOR

Assume that we are given an abelian group G and g₀, ..., g_{n−1} ∈ G. Further, for a, b ∈ G, a · b represents the group operation applied to a and b. Let us also define g^x = 1 (the identity element) if x = 0, and g^x = g if x = 1.

Consider the distribution over the group G obtained by sampling x₀, ..., x_{n−1} uniformly at random and taking the product g₀^{x₀} · g₁^{x₁} ⋯ g_{n−1}^{x_{n−1}}. The following is the main theorem of this section.

Theorem 3.1: Let Γ : {0,1}^t → {0,1}ⁿ be the INW generator with λ = ε/|G|⁷. Consider the distributions

D = {g₀^{x₀} · g₁^{x₁} ⋯ g_{n−1}^{x_{n−1}} : (x₀, ..., x_{n−1}) ← Γ(U_t)},
D′ = {g₀^{x₀} · g₁^{x₁} ⋯ g_{n−1}^{x_{n−1}} : (x₀, ..., x_{n−1}) ← U_n}.

Then D and D′ are ε-close in statistical distance.

As the seed length required for the INW generator is O(log n · log(1/λ)), we get the following corollary.

³ This is not exactly true, but we make it precise later.
Corollary 3.2: There exists a polynomial-time computable function Γ : {0,1}^t → {0,1}ⁿ with t = O(log n · (log m + log(1/ε))) such that, for every abelian group G of size m and g₀, ..., g_{n−1} ∈ G, the distributions D and D′ defined as

D = {g₀^{x₀} · g₁^{x₁} ⋯ g_{n−1}^{x_{n−1}} : (x₀, ..., x_{n−1}) ← Γ(U_t)},
D′ = {g₀^{x₀} · g₁^{x₁} ⋯ g_{n−1}^{x_{n−1}} : (x₀, ..., x_{n−1}) ← U_n}

are ε-close in statistical distance.

In order to prove Theorem 3.1, we first need to go over some basic Fourier analysis.

Definition 3.3: A character χ : G → C is a group homomorphism, i.e., for x, y ∈ G, χ(x · y) = χ(x)χ(y). Any abelian group G has |G| distinct characters, including the trivial character which maps every element to 1.

Definition 3.4: For a distribution D : G → [0,1] and a character χ : G → C, we define D̂(χ) = Σ_{x∈G} χ(x)D(x). Note that this differs from the standard definition of the Fourier coefficient of a function by a normalization factor.

For any element g ∈ G, consider the matrix R_g defined as follows:

R_g(x, y) = 1 if x · y⁻¹ = g, and 0 otherwise.

First of all, we observe that all the matrices of the form R_g commute with each other, because the underlying group is commutative. This implies that they are simultaneously diagonalizable in some basis. In fact, R_g = Γ · ρ(g) · Γ⁻¹, where Γ is a unitary matrix and ρ(g) is a diagonal matrix:

R_g = Γ · diag[χ₁(g), ..., χ_{|G|}(g)] · Γ⁻¹.

In the above, χ₁, ..., χ_{|G|} are the distinct characters of the group G.

Now, note that we can phrase the group-products problem as a branching program. More precisely, we have |G| states at every level, corresponding to the group elements. From level i to level i+1, if the input is 0, then g ↦ g; if the input is 1, then g ↦ g · g_i. Therefore, the walk matrices are M_{0,i} = Id (the |G|×|G| identity matrix) and M_{1,i} = R(g_i). We now make the following observation.
Observation 3.5: Let x_p be the root node of the pseudo tree and x_t the root node of the true tree, with the parameters of the pseudo tree as in Theorem 3.1, and with the walk matrices at the i-th step being M_{0,i} and M_{1,i} as defined above. Let L(x_p) and L(x_t) be the labels of x_p and x_t respectively. If ‖L(x_p) − L(x_t)‖₂ ≤ ε/|G|, then Theorem 3.1 follows.

Proof: Note that the distribution obtained in the case of the pseudo tree is given by e₁ᵀ · L(x_p), where e₁ is the standard unit vector with 1 at the position of the group identity and 0 everywhere else. Similarly, the distribution in the case of the true tree is e₁ᵀ · L(x_t). The statistical distance between the two distributions is bounded by

‖e₁ᵀ (L(x_p) − L(x_t))‖₁ ≤ √|G| · ‖e₁ᵀ (L(x_p) − L(x_t))‖₂ ≤ √|G| · ‖L(x_p) − L(x_t)‖₂ ≤ ε/√|G| ≤ ε,

which proves our claim.

In order to prove that ‖L(x_p) − L(x_t)‖₂ is small, we make the following very important observation.

Observation 3.6: Corresponding to the walk matrix M_{b,i}, let the diagonalized matrix be ρ_{b,i}. Since all the matrices are simultaneously diagonalizable, we can assume that the walk matrices at the leaf nodes of the pseudo and the

true trees are ρ_{b,i} instead of M_{b,i}. The labeling of the non-leaf nodes is generated in the same way as before. More precisely, in the true tree, the label of a non-leaf node is the product of the labels of its two children, while in the pseudo tree, the label of a non-leaf node is the expander product of its two children (the expander being the underlying expander of the INW generator). Also, since all the matrices at the leaf nodes are diagonal, each of the intermediate products is also diagonal (in both trees); therefore, to bound ‖L(x_p) − L(x_t)‖₂, it suffices to put an upper bound on every diagonal entry of L(x_p) − L(x_t).

From the above observation, it suffices to show that for any i ∈ [|G|], |L(x_p)[i] − L(x_t)[i]| ≤ ε/|G|. Here, for a matrix A, A[i] represents its i-th diagonal entry. Let us fix a particular i ∈ [|G|].

Claim 3.7: Consider any node x in the true tree, and let L(x) be its labeling. Consider the i-th diagonal entry of L(x). Then either the entry is 1, or it is at most 1 − 1/|G|² in magnitude. Further, it is 1 if and only if the corresponding diagonal entry is 1 for each of the (diagonalized) walk matrices at all the leaf nodes.

Proof: We first prove the claim for the leaf nodes. Note that the diagonal entries correspond to the characters; say the i-th diagonal entry corresponds to the character χ_i. Then the i-th diagonal entry of the t-th leaf is (1/2)(χ_i(e) + χ_i(g_t)) = (1/2)(1 + χ_i(g_t)). However, because a character is a homomorphism, χ_i(g_t)^|G| = 1. Hence χ_i(g_t) = e^{2πia/|G|} = ω for some integer 0 ≤ a < |G|, and if ω ≠ 1 (i.e., 0 < a < |G|),

|(1 + ω)/2| = |cos(πa/|G|)| ≤ cos(π/|G|) ≤ 1 − 1/|G|².

This gives us the result for the leaf nodes. Now, assume the hypothesis to be true for nodes at height h < t₀, and consider a node x_t at height t₀. For non-leaf nodes, we observe that if y_t and z_t are the children of x_t, then the i-th diagonal entry of x_t is the product of the corresponding entries of y_t and z_t.
By the induction hypothesis, unless both entries of y_t and z_t are 1, at least one of them has magnitude at most 1 − 1/|G|², and hence so does the entry of x_t. Also, if the entries of both y_t and z_t are 1, then so is the entry of x_t, and by the induction hypothesis all the leaves in the trees rooted at y_t and z_t have walk matrices whose i-th diagonal entry is 1. This proves the claim for x_t as well.

For the rest of the discussion, we fix i and let L′(x) denote L(x)[i] for any node x. The next claim shows that for any node x_t in the true tree, if the i-th diagonal entry is at least 1/10 in magnitude, then the corresponding entry of x_p is within an error of roughly λ|G|⁴ of it. All the calculations in this section use that λ|G|⁶ < 1/10 (eventually we set λ = ε/|G|⁷). More precisely, we have the following claim.

Claim 3.8: Let x_t be a node in the true tree such that its labeling L′(x_t) = l_i satisfies |l_i| ≥ 1/10. Then |L′(x_t) − L′(x_p)| ≤ λ|G|⁴ log(1/|l_i|).

Proof: We prove it by induction on the height of the node x_t. Note that it is trivially true for the leaves, as the marginal of the INW generator on any coordinate is uniformly random. Let us assume it is true for nodes at height < h, and let x be a node at height h with children y and z. We distinguish two situations.

First, assume that at least one of L′(y_t) or L′(z_t) is 1; without loss of generality, say L′(y_t) = 1. Then L′(x_t) = L′(z_t). Also, we note that L′(x_p) = L′(z_p) (we do not incur an error λ here, because of the observations following Lemma 2.3). By the induction hypothesis, |L′(z_p) − L′(z_t)| ≤ λ|G|⁴ log(1/|L′(z_t)|). Hence

|L′(x_p) − L′(x_t)| = |L′(z_p) − L′(z_t)| ≤ λ|G|⁴ log(1/|L′(z_t)|) = λ|G|⁴ log(1/|l_i|).

Next, we consider the case when both |L′(y_t)| ≤ 1 − 1/|G|² and |L′(z_t)| ≤ 1 − 1/|G|². Write

L′(y_p) = L′(y_t) + ε_y,    L′(z_p) = L′(z_t) + ε_z.

Let us assume without loss of generality that |L′(y_t)| ≤ |L′(z_t)|.
Then, for some ε′ with |ε′| ≤ λ,

|L′(x_p) − L′(x_t)| = |ε_y L′(z_t) + ε_z (L′(y_t) + ε_y) + ε′|
≤ |ε_z| + |ε_y| · |L′(z_t)| + λ
≤ λ|G|⁴ log(1/|L′(z_t)|) + (1 − 1/|G|²) · λ|G|⁴ log(1/|L′(y_t)|) + λ
≤ λ|G|⁴ log(1/(|L′(z_t)| · |L′(y_t)|))
= λ|G|⁴ log(1/|L′(x_t)|).

The last inequality uses the fact that |L′(y_t)| ≤ 1 − 1/|G|².

We next show that for any node x_t in the true tree, if the i-th diagonal entry is at most 1/10 in magnitude, then the corresponding entry of x_p is within an error of λ|G|⁶ of it.

Claim 3.9: Let x_t be a node in the true tree such that L′(x_t) = l_i satisfies |l_i| < 1/10. Then |L′(x_t) − L′(x_p)| ≤ λ|G|⁶.

Proof: Let the children of x be y and z, and assume by the induction hypothesis that the claim holds for all nodes below x. We consider the following four cases:

- At least one of L′(y_t) = 1 or L′(z_t) = 1; assume L′(y_t) = 1.
- Both |L′(y_t)| ≥ 1/10 and |L′(z_t)| ≥ 1/10.
- Both |L′(y_t)| < 1/10 and |L′(z_t)| < 1/10.
- Exactly one of |L′(y_t)| and |L′(z_t)| is less than 1/10.

In the first case, note that by the induction hypothesis the claim holds for y and z. Now, as one of the entries is 1, by Lemma 2.3 we see that

L′(x_p) = L′(y_p) ⊛_H L′(z_p) = L′(y_p) · L′(z_p) = L′(z_p).

As |L′(z_p) − L′(z_t)| ≤ λ|G|⁶, we get |L′(x_p) − L′(x_t)| ≤ λ|G|⁶.

In the second case, by Claim 3.8 applied to both children and a basic union bound on the errors, we get that the error is at most 2 log 10 · λ|G|⁴ + λ ≤ λ|G|⁶.

For the next two cases, let us write L′(z_p) = L′(z_t) + ɛ_z and L′(y_p) = L′(y_t) + ɛ_y. For the third case, |ɛ_y|, |ɛ_z| ≤ λ|G|^6. Then, for some ɛ such that |ɛ| ≤ λ,

|L′(x_p) − L′(x_t)| = |ɛ_y L′(z_t) + ɛ_z (L′(y_t) + ɛ_y) + ɛ|
≤ |ɛ_y| |L′(z_t)| + |ɛ_z| |L′(y_t)| + |ɛ_z| |ɛ_y| + λ
≤ λ|G|^6 · |L′(z_t)| + λ|G|^6 · |L′(y_t)| + λ^2|G|^12 + λ
≤ λ|G|^6,

where the last step uses |L′(y_t)|, |L′(z_t)| < 1/10. For the last case, assume that |L′(y_t)| < 1/10 and |L′(z_t)| ≥ 1/10. Hence, for this part, we have that |ɛ_y| ≤ λ|G|^6 and |ɛ_z| ≤ (log 10) · λ|G|^4. Again, we have that

|L′(x_p) − L′(x_t)| ≤ |ɛ_y| |L′(z_t)| + |ɛ_z| |L′(y_t)| + |ɛ_z| |ɛ_y| + λ
≤ λ|G|^6 (1 − 1/|G|^2) + (log 10) · λ|G|^4 · (1/10) + (log 10) · λ^2|G|^10 + λ
≤ λ|G|^6.

Combining Claims 3.8 and 3.9, we get the following lemma, which combined with Observation 3.6 implies Theorem 3.1.

Lemma 3.10: Let x_t be a node in the true tree and x_p be the corresponding node in the pseudo tree. Then |L′(x_t) − L′(x_p)| ≤ λ|G|^6. This implies that ‖L(x_t) − L(x_p)‖_2 ≤ λ|G|^6 = ɛ/|G| for λ = ɛ/|G|^7.

IV. SMALL BIASED SPACES FOR GROUP PRODUCTS

Next, we introduce the problem of small biased spaces for group products. Let G be an arbitrary group and let x_1, ..., x_n ∈ {0,1}. Again, for a, b ∈ G, we let a · b denote the group operation applied to a and b. We also remind ourselves that for g ∈ G and x ∈ {0,1},

g^x = 1 if x = 0, and g^x = g if x = 1.

Consider the distribution D = g_1^{x_1} · ... · g_n^{x_n} where g_1, ..., g_n ∈ G are chosen independently and uniformly at random. We seek to come up with an efficiently computable function Γ : {0,1}^t → G^n such that when (g_1, ..., g_n) is sampled from Γ(U_t), the distribution D′ = g_1^{x_1} · ... · g_n^{x_n} is ɛ-close to D in statistical distance. The aim is to keep t as small as possible, and a probabilistic argument shows that it is possible to get t = O(log |G| + log n + log(1/ɛ)). The task of getting an explicit function Γ was first considered by Meka and Zuckerman [4]. They obtained the following result.

Theorem 4.1: There exists some fixed constant c = c(G) < 1/|G| and Γ : {0,1}^t → G^n with t = O(log n) such that if D and D′ are as defined above, then for any h ∈ G,

|Pr_{x ∼ D}[x = h] − Pr_{x ∼ D′}[x = h]| ≤ c.

Also, one of their theorems, coupled with the pseudorandom generator from [1], gives the following result: for every ɛ > 0, there exists Γ : {0,1}^t → G^n with t = O(log n · (log(1/ɛ) + |G|^{O(1)})) such that if D and D′ are as defined above, then

|Pr_{x ∼ D}[x = h] − Pr_{x ∼ D′}[x = h]| ≤ ɛ.

We improve on their result in terms of the dependence of the seed length on the size of the group. Also, as we shall see, to get this improvement we do not require the whole machinery of [1], and our proof is rather short and simple. In particular, we prove the following theorem.

Theorem 4.2: Let Γ : {0,1}^t → G^n denote the INW generator with λ = ɛ/(2|G|). If X denotes the output distribution of Γ(U_t), then for any h ∈ G,

|Pr_{(g_1,...,g_n) ∼ X}[g_1^{x_1} · ... · g_n^{x_n} = h] − Pr_{(g_1,...,g_n) ∼ G^n}[g_1^{x_1} · ... · g_n^{x_n} = h]| ≤ ɛ.

As a corollary, using the INW generator described in Remark 2.2, we get that there is a polynomial time computable function Γ : {0,1}^t → G^n with t = O(log n · (log |G| + log(1/ɛ))) such that if X denotes the output distribution of Γ(U_t), then for any h ∈ G,

|Pr_{(g_1,...,g_n) ∼ X}[g_1^{x_1} · ... · g_n^{x_n} = h] − Pr_{(g_1,...,g_n) ∼ G^n}[g_1^{x_1} · ... · g_n^{x_n} = h]| ≤ ɛ.

We need some basic representation theory for this proof, which we review below.

Definition 4.3: Let V be any vector space over C. Then GL(V) is the group of all invertible linear transformations from V to V, with the group operation being function composition.

Definition 4.4: For a group G and a vector space V, a map ρ : G → GL(V) is said to be a representation of G if ρ is a group homomorphism. In this paper, we only consider vector spaces V over C.

Definition 4.5: A group representation ρ : G → GL(V) is said to be irreducible if it has no nontrivial proper invariant subspace. In other words, if W ⊊ V satisfies ρ(g)W ⊆ W for all g ∈ G, then W = {0}. We say two representations ρ_1 and ρ_2 are isomorphic if there exists an invertible matrix τ such that for every g ∈ G, τ · ρ_1(g) · τ^{−1} = ρ_2(g).

Theorem 4.6 (Maschke): Let V be any finite dimensional vector space over C, G be a finite group and ρ : G → GL(V) be a representation. Then ρ can be written as a direct sum of irreducible representations of the group G.
We next state Schur's lemma.

Lemma 4.7: Let ρ and τ be two non-isomorphic irreducible representations of a group G. Then,

⟨τ_{i,j}, ρ_{k,l}⟩ = E_{g ∈ G}[τ_{i,j}(g) · conj(ρ_{k,l}(g))] = 0

and for a single irreducible representation τ,

⟨τ_{i,j}, τ_{k,l}⟩ = δ_{i,k} δ_{j,l} / d_τ,

where d_τ is the dimension of τ. We remark that every group has a trivial irreducible representation ρ_t : G → GL(C) such that ρ_t(x) = 1 for all x. We now state the following simple corollary of Schur's lemma and the earlier observation.

Corollary 4.8: Let ρ : G → GL(V) be a non-trivial irreducible representation of G. Then Σ_{x ∈ G} ρ(x) = 0.

Proof: By Schur's lemma, if τ is the trivial representation, then for any i, j, k, l, ⟨τ_{i,j}, ρ_{k,l}⟩ = 0, which implies the claim.

We now return to the problem of constructing small bias spaces over group products. We use the INW generator described in Remark 2.2, and now state the analogue of Lemma 2.3 in this setting. To do this, let us assume that Γ_1, Γ_2 : {0,1}^r → G^m and ρ_1, ρ_2 : G^m → C^{w×w}, and let us define A and B as follows:

A = (1/2^r) Σ_{x ∈ {0,1}^r} ρ_1(Γ_1(x)),   B = (1/2^r) Σ_{x ∈ {0,1}^r} ρ_2(Γ_2(x))   (2)

Lemma 4.9: Let ρ_1, ρ_2 : G^m → C^{w×w} be such that ‖ρ_j(x)‖_2 ≤ 1 for all x ∈ G^m and j ∈ {1, 2}. Let Γ_1, Γ_2 : {0,1}^r → G^m, and let H be a 2^d-regular graph on {0,1}^r with second eigenvalue λ. Then, for A and B as defined in (2),

‖A · B − A ⊗_H B‖_2 ≤ λ.

We also have the following two observations:
- If either of ρ_1 or ρ_2 is a constant function, then A · B − A ⊗_H B = 0.
- If A and B are identity on some subspace W, then A · B as well as A ⊗_H B are identity on W.

We now formulate the problem of constructing small biased spaces for group products in terms of getting pseudorandomness for a certain permutation branching program (for fooling which, we use the INW generator). The branching program has n + 1 layers. Each layer consists of |G| vertices, each vertex labeled by an element of G. The branching program starts at the identity element of G in the 0-th layer. Now, if x_i = 1, the branching program moves from vertex x in the (i−1)-th layer to vertex x · g_i in the i-th layer. On the other hand, if x_i = 0, the branching program moves from x in the (i−1)-th layer to x in the i-th layer. Thus, we have |G| walk matrices for the transition from the (i−1)-th layer to the i-th layer.
We call them M_{h,i} for every h ∈ G. Further, if x_i = 0, they are all the identity matrix. In case x_i = 1, M_{h,i} is defined by

M_{h,i}(x, y) = 1 if x^{−1} · y = h, and 0 otherwise.

We observe that if we take a random walk starting at the vertex corresponding to the identity element of the group G in the zeroth layer, and then choose a random walk matrix among the M_{h,i} to go from layer i − 1 to layer i, then after j steps the distribution on the j-th layer is exactly the distribution g_1^{x_1} · ... · g_j^{x_j}, where the g_i's are chosen uniformly at random. Hence, it suffices to analyze the error for this branching program when the g_i's are chosen from the output of the INW generator in order to prove Theorem 4.2. We next make the observation that the walk matrices can be block diagonalized such that the blocks have some nice properties.

Observation 4.10: If x_i = 1, the map h ↦ M_{h,i} is a group representation. This implies that there is a basis transformation in which all the walk matrices can be simultaneously block diagonalized. Note that this is because if x_i = 0, then all the walk matrices are the identity, and hence after any change of basis they remain the identity in every block. If x_i = 1, then each of the blocks in the walk matrices M_{h,i} corresponds to some irreducible representation; the corresponding block in M_{h,i} when x_i = 0 is always the identity. In particular, the following is true:
- If the block corresponds to the trivial representation, then the corresponding block is the 1×1 identity matrix in all the walk matrices.
- If the block corresponds to a non-trivial representation, then the following holds: if x_i = 0, then all the walk matrices are the identity in that block; if x_i = 1, then the sum of the walk matrices in that block is identically zero. Also, all the blocks are unitary matrices because they correspond to irreducible representations.
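To make Observation 4.10 concrete, here is a toy numerical sketch (our illustration, not part of the paper's construction) for the cyclic group Z_3, written additively: the walk matrices are circulant permutation matrices, the character basis simultaneously diagonalizes them into 1×1 blocks, the trivial block of every walk matrix is 1, and the non-trivial blocks of the sum over h vanish, as Corollary 4.8 predicts.

```python
import cmath

n = 3
omega = cmath.exp(2j * cmath.pi / n)

# Walk matrices for G = Z_3 (written additively): M_h(x, y) = 1 iff y - x = h,
# the additive analogue of the condition x^{-1} * y = h in the text.
def walk_matrix(h):
    return [[1.0 if (y - x) % n == h else 0.0 for y in range(n)] for x in range(n)]

def apply(vec, M):
    # Row vector times matrix, matching the left-action convention in the text.
    return [sum(vec[x] * M[x][y] for x in range(n)) for y in range(n)]

# Character vectors v_j(x) = omega^(j*x); j = 0 is the trivial representation.
v = [[omega ** (j * x) for x in range(n)] for j in range(n)]

# Each v_j is a simultaneous eigenvector of every walk matrix, with eigenvalue
# omega^(-j*h): this is the (here 1x1) block diagonalization of Observation 4.10.
eigs = {(j, h): apply(v[j], walk_matrix(h))[0] / v[j][0]
        for j in range(n) for h in range(n)}

# Trivial block: always 1. Non-trivial block: the walk matrices sum to zero,
# since sum over h of omega^(-j*h) = 0 for j != 0 (Corollary 4.8).
trivial_block = [eigs[(0, h)] for h in range(n)]
nontrivial_sum = sum(eigs[(1, h)] for h in range(n))
```

For a non-abelian G the blocks have dimension d_ρ > 1 rather than being scalars, but the same two facts (trivial block equal to the identity, non-trivial blocks summing to zero) persist.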
The next observation says that since all the walk matrices can be simultaneously block diagonalized, we might as well treat the blocks individually and analyze the error incurred by using the INW generator vis-a-vis the uniform distribution in each of the blocks.

Observation 4.11: Since all the walk matrices are simultaneously block diagonalizable, we can assume that the leaves of the pseudo tree for the INW generator, as well as of the true tree, are marked by these block diagonalized matrices as opposed to the true walk matrices. Also, consider a particular block corresponding to the representation ρ. Let us instead label the leaf nodes (in both trees) by the identity matrix of dimension d_ρ if x_i = 0, and by the matrices ρ(h) for all h ∈ G if x_i = 1. The labels for the non-leaf nodes in the true and the pseudo tree are generated by taking the true and the expander products of the children respectively. For a node x, we denote this labeling by L′(x). If we prove that for any node x_p in the pseudo tree, the corresponding node x_t in the true tree, and any representation ρ, the labelings L′(x_p) and L′(x_t) satisfy ‖L′(x_p) − L′(x_t)‖_2 ≤ ɛ/|G|, then it implies that the INW generator fools the branching program with error ɛ. So, we now treat each block individually. Let us fix a representation ρ and analyze the difference between the labelings

in the true tree and the pseudo tree. If the representation corresponding to the block is trivial, then the following claim says that there is no error between the pseudo tree and the true tree.

Claim 4.12: Let x_p and x_t be corresponding nodes in the pseudo tree and the true tree respectively, and let L′(x_p) and L′(x_t) be the labelings of x_p and x_t with respect to the trivial representation. Then, L′(x_p) = L′(x_t) = 1.

Proof: Note that if we consider the trivial representation, then all the leaf nodes in both the true and the pseudo tree are labeled by the 1×1 identity matrix; the labeling of the leaf nodes in both trees is thus identically 1. We now use induction on the height of a node. Let x_p be a node at height t in the pseudo tree and x_t be the corresponding node in the true tree. By the induction hypothesis, the labelings of the children of x_t, namely y_t and z_t, are 1, and similarly the labelings of y_p and z_p are 1. Now, L′(x_t) = L′(y_t) · L′(z_t) = 1. Further, L′(x_p) = L′(y_p) · L′(z_p) = 1 by Lemma 4.9, as the labeling is constant (identically 1) on all the leaf nodes under z_p. This proves the claim.

Next, we consider the case when the representation ρ is non-trivial.

Claim 4.13: Let x_p and x_t be corresponding nodes in the pseudo tree and the true tree respectively, and let L′(x_p) and L′(x_t) be the labelings of x_p and x_t corresponding to the representation ρ. Then, ‖L′(x_p) − L′(x_t)‖_2 ≤ 2λ.

Proof: We prove this by induction; we observe that the claim holds for the leaf nodes. We first observe that for any node x_t in the true tree, L′(x_t) is either the identity matrix or L′(x_t) = 0. This is because if x_i = 0, then the i-th leaf node is labeled by the identity matrix, while if x_i = 1, then the leaf node is labeled by 0 (as Σ_{h ∈ G} ρ(h) = 0 by Corollary 4.8). Clearly, for a node x_t, L′(x_t) is the identity if and only if all the leaf nodes below it are labeled by the identity; otherwise it is 0.
Now, consider any node x_t in the true tree and its corresponding node x_p. Let the children of x_t be y_t and z_t, and suppose one of them, say y_t, satisfies L′(y_t) = Id. Then, by Lemma 4.9, L′(x_t) = L′(z_t) and L′(x_p) = L′(z_p). However, by induction on the height of the tree, we can assume that ‖L′(z_p) − L′(z_t)‖_2 ≤ 2λ, which implies that ‖L′(x_p) − L′(x_t)‖_2 ≤ 2λ. So, we may assume that both L′(y_t) and L′(z_t) are 0. In that case, by the induction hypothesis, we can assume that ‖L′(y_p)‖_2 ≤ 2λ and similarly ‖L′(z_p)‖_2 ≤ 2λ. By Lemma 4.9, ‖L′(x_p) − L′(y_p) · L′(z_p)‖_2 ≤ λ. This implies that ‖L′(x_p)‖_2 ≤ λ + 4λ^2 ≤ 2λ (provided λ < 1/10).

The above two claims together with Observation 4.11 imply that it suffices to have λ = ɛ/(2|G|) to get an overall error of ɛ, and hence Theorem 4.2 follows.

V. PSEUDORANDOMNESS FOR PERMUTATION AND REGULAR BRANCHING PROGRAMS

In this section, we sketch the proof of the following theorem (the full proof is deferred to the full version).

Theorem 5.1: Let Γ : {0,1}^t → {0,1}^n be the INW generator with λ = ɛ/2^{w^8}. Then the output of Γ(U_t) is ɛ-indistinguishable from U_n for read-once width w permutation branching programs of length n.

We remark that the output of the INW generator with λ = ɛ/(2^{w^8} · log n) is ɛ-indistinguishable from U_n for read-once width w regular branching programs of length n. Towards the end of the paper, we will sketch some of the changes that need to be made to analyze regular branching programs. Using the standard INW generator, we get the following corollary.

Corollary 5.2: There is a polynomial time computable function Γ : {0,1}^t → {0,1}^n with t = O(log n · (w^8 + log(1/ɛ))) such that Γ(U_t) is ɛ-indistinguishable from U_n for read-once permutation branching programs of width w. Similarly, there is a polynomial time computable function Γ : {0,1}^t → {0,1}^n with t = O(log n · (log log n + w^8 + log(1/ɛ))) such that Γ(U_t) is ɛ-indistinguishable from U_n for read-once regular branching programs of width w.
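The analysis of Theorem 5.1, like that of Claim 4.13 above, rests on the fact that the per-merge error λ does not accumulate with the height of the tree. The following toy sketch (our illustration, not from the paper) iterates the recursion from the proof of Claim 4.13 numerically: a node whose children's pseudo-labels have norm at most b gets a pseudo-label of norm at most λ + b², and starting from exact leaves this bound stays below 2λ at every height.

```python
# One merge costs lam (Lemma 4.9), plus the product of the children's norms.
# Iterating b <- lam + b*b from b = 0 models the bound at increasing heights;
# Claim 4.13 promises it never exceeds 2*lam when lam < 1/10.
lam = 0.05
b, bounds = 0.0, []
for height in range(60):
    b = lam + b * b
    bounds.append(b)
worst = max(bounds)
```

The iteration converges quickly to the fixed point (1 − sqrt(1 − 4λ))/2 ≈ λ + λ², comfortably below the 2λ ceiling, which is exactly why the seed length only pays log(1/λ) per level of the tree.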
We now describe the overall strategy for proving Theorem 5.1 for permutation branching programs, and later outline the modifications for regular branching programs. As described in Section II, we will label the leaf nodes in both the INW tree as well as the true tree with the walk matrices corresponding to the branching program. In particular, the label for the i-th leaf node will simply be the average of the walk matrix for the transition corresponding to 0 and that corresponding to 1 (recall that we call them M_{0,i} and M_{1,i}). Subsequently, the label of a non-leaf node is the product of the labels of its children in the true tree, and the expander product in the case of the pseudo tree. Much like the case of abelian groups, for a node x_t in the true tree and the corresponding node x_p in the pseudo tree, we would like to say that ‖L(x_p) − L(x_t)‖ is a function of λ and L(x_t) alone. However, in the case of abelian groups we had the luxury of diagonalizing and treating each coordinate individually; here we cannot do that. We instead adopt the following strategy. For discussing the intuition further, it will be helpful to introduce the following concepts.

Definition 5.3: For a matrix M ∈ C^{w×w}, we say W is the fixed point subspace of M if W is a maximal subspace such that x · M = x for all x ∈ W. We note that for a given matrix M, the fixed point subspace is uniquely defined. Further, for the matrix M, we define its non-trivial subspace to be the orthogonal complement W^⊥.

Definition 5.4: For a matrix A ∈ C^{m×m} with non-trivial subspace W ⊆ C^m, we define

‖A‖_W = max_{w ∈ W, w ≠ 0} ‖wA‖ / ‖w‖

For a matrix A ∈ C^{m×m} and a subspace W ⊆ C^m, we define

‖A‖_{F,W} = (Σ_i ‖v_i A‖^2)^{1/2},

where the v_i's form an orthonormal basis of W. Further, we note that ‖A‖_{F,W} is independent of the choice of basis for W, and is the same as ‖A‖_F when W = C^m.

Coming back to the structure of the proof, we show that for any node x_t in the true tree and the corresponding node x_p, ‖L(x_p) − L(x_t)‖ can be bounded as a function of the dimension of the non-trivial subspace of L(x_t) (call it W) and ‖L(x_t)‖_W. In particular, we will allow the error to grow as the dimension of W increases or as ‖L(x_t)‖_W decreases. In fact, the dependence of the error on the dimension of the non-trivial subspace W shall dominate the dependence of the error on the norm of the label on its non-trivial subspace. The proof shall proceed by induction on the height of the nodes. To convey the intuition, let us say that we will claim that for any pair of corresponding nodes x_t and x_p,

‖L(x_p) − L(x_t)‖ ≤ f(α) · g(β) · λ,

where, if W is the non-trivial subspace of L(x_t), then α = ‖L(x_t)‖_W and β = dim(W). Further, let our choice be such that

lim_{λ → 0} λ · f(α) · g(β) = 0.

This ensures that if we just want constant error, it suffices to choose some constant λ depending only on w. Now, assume that this holds by the induction hypothesis up to some height, and consider the inductive step. Let x_t be a node in the true tree and y_t and z_t be its children, and let x_p, y_p and z_p be the corresponding nodes in the pseudo tree. There are exactly three situations which can arise:
- The non-trivial subspaces of y_t and z_t, and hence of x_t, are the same. In this case, the allowed dependence of the error on β plays no role; the only relevant factor is α, and the analysis is similar to the analysis in the case of abelian groups.
- The non-trivial subspaces of y_t and z_t are such that neither is contained within the other. In this case, the non-trivial subspace of x_t has strictly bigger dimension than those of y_t or z_t.
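Definitions 5.3 and 5.4 can be illustrated on a hand-picked toy example (ours, not from the proof): for w = 3, let A be the average of the two transition matrices of a permutation branching program layer whose 0-edges form the identity matching and whose 1-edges form a 3-cycle. The fixed point subspace of A is spanned by the all-ones vector, and on the non-trivial subspace W every vector is shrunk by exactly a factor 1/2, so ‖A‖_W = 1/2 < 1, the kind of strict contraction that Claim 5.5 below guarantees in general.

```python
import math

# A = (P_id + P_cyc) / 2 for the 3-cycle 0 -> 1 -> 2 -> 0 (row-vector action x*A).
P_id = [[1.0 if x == y else 0.0 for y in range(3)] for x in range(3)]
P_cyc = [[1.0 if y == (x + 1) % 3 else 0.0 for y in range(3)] for x in range(3)]
A = [[(P_id[x][y] + P_cyc[x][y]) / 2 for y in range(3)] for x in range(3)]

def apply(vec, M):
    return [sum(vec[x] * M[x][y] for x in range(3)) for y in range(3)]

def norm(vec):
    return math.sqrt(sum(t * t for t in vec))

# The all-ones vector is fixed by A: it spans the fixed point subspace.
ones_out = apply([1.0, 1.0, 1.0], A)

# W = vectors orthogonal to all-ones. Sweep directions in W and measure the
# contraction ||wA|| / ||w||; for this circulant A it is 1/2 in every direction.
w1, w2 = [1.0, -1.0, 0.0], [1.0, 1.0, -2.0]
ratios = []
for k in range(360):
    t = math.pi * k / 180
    w = [math.cos(t) * w1[i] + math.sin(t) * w2[i] for i in range(3)]
    ratios.append(norm(apply(w, A)) / norm(w))
norm_A_W = max(ratios)
```

For a general label A the contraction on W is not uniform across directions, which is precisely why the proof tracks both ‖A‖_W and dim(W).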
It is here that the dependence of the error on β plays a role. In fact, because we allow the dependence on β to supersede any dependence on α, we can bound the error easily. The only thing we need to show is that the norm of x_t on its non-trivial subspace is not very close to 1, which we manage to show easily.
- The non-trivial subspace of y_t is properly contained in that of z_t (or vice-versa). In this situation, it is possible that the labeling of x_t has the same norm on its non-trivial subspace as z_t, yet ‖L(x_p) − L(x_t)‖_W > ‖L(z_p) − L(z_t)‖_W, where W is the non-trivial subspace of z_t and x_t. What is more concerning is that one can have a series of nodes (in the true tree), call them x_0, ..., x_m and y_1, ..., y_m, such that x_i has two children, x_{i−1} and y_i, and for all i, the non-trivial subspace of L(y_i) is properly contained in the non-trivial subspace of L(x_0). The way around in this situation is to do a global analysis of the error incurred by the chain as a whole, rather than trying to do it on a per-node basis. The proof uses ideas from the Key Convergence Lemma in [1].

Claim 5.5: Let A be the label of a node in the true tree of a width w permutation branching program. Let V be the subspace of C^w such that x · A = x for all x ∈ V, and let V^⊥ be the orthogonal complement of V. Then, for all x ∈ V^⊥,

‖xA‖_2 ≤ (1 − 4^{−w}) ‖x‖_2.

The proof of the above claim uses the decomposition of regular representations into their irreducible components, followed by an application of Schur's lemma; we do not elaborate on the proof here. This claim is analogous to Claim 3.7 for abelian groups. In particular, this claim allows us to prove the first case (i.e., the non-trivial subspaces of y_t and z_t are the same as that of x_t) for permutation branching programs exactly in the same way as the proof goes for abelian groups. We do not elaborate here on the proofs of the second and the third cases. VI.
CHANGES FOR THE ANALYSIS OF REGULAR BRANCHING PROGRAMS

We now sketch the changes in the analysis for regular branching programs. The important observation, which lets us carry over some of the techniques we developed for permutation branching programs, is that by definition the number of edges coming into and going out of any node in a regular branching program is 2. Thus, by Hall's theorem, the edges between any two layers can be decomposed into two disjoint matchings. This means that the labelings in the true tree for a regular branching program correspond to some permutation branching program. In particular, an analogue of Claim 5.5 will hold for regular branching programs as well. The major point of departure from the analysis of permutation branching programs will be the following: if y lies in the fixed point subspace of L(x_t) for a node x_t in the true tree, then for a permutation branching program, y · (L(x_t) − L(x_p)) = 0. For regular branching programs, at this stage, we can only bound ‖y · (L(x_t) − L(x_p))‖_2 by O(λ · 2^{w^8} · log n). This gives us the aforementioned seed length, and it is plausible that improving this bound would immediately improve the seed length for regular branching programs. We also mention that for getting the current bound on the seed length for regular branching programs, we do not need to go through the full analysis of the third case.
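The matching decomposition used above can be made concrete. The following sketch (our illustration under the stated 2-regularity assumption; the function name is ours, not the paper's) 2-colors the edges between two adjacent layers by walking the alternating cycles they form, producing the two disjoint perfect matchings guaranteed for a 2-regular bipartite multigraph.

```python
def split_into_matchings(edges, n):
    """Split the edge multiset of a 2-regular bipartite (multi)graph on n + n
    vertices into two perfect matchings. edges: list of (u, v) pairs in which
    every left vertex u and every right vertex v appears exactly twice."""
    left = {u: [] for u in range(n)}
    right = {v: [] for v in range(n)}
    for i, (u, v) in enumerate(edges):
        left[u].append(i)
        right[v].append(i)

    def other(pair, e):
        return pair[0] if pair[1] == e else pair[1]

    color = [None] * len(edges)
    for start in range(len(edges)):
        if color[start] is not None:
            continue
        # The edges form disjoint even cycles (pivot alternately on right and
        # left endpoints); walk each cycle, coloring edges alternately 0 and 1.
        e, c, pivot_right = start, 0, True
        while color[e] is None:
            color[e] = c
            u, v = edges[e]
            e = other(right[v], e) if pivot_right else other(left[u], e)
            c, pivot_right = 1 - c, not pivot_right
    return ([edges[i] for i in range(len(edges)) if color[i] == 0],
            [edges[i] for i in range(len(edges)) if color[i] == 1])

# Example: a single 6-cycle between two layers of width 3.
m0, m1 = split_into_matchings([(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 0)], 3)
```

Each color class then plays the role of one of the two matchings of a permutation branching program layer, which is what makes the permutation-program machinery applicable.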


More information

Graph coloring, perfect graphs

Graph coloring, perfect graphs Lecture 5 (05.04.2013) Graph coloring, perfect graphs Scribe: Tomasz Kociumaka Lecturer: Marcin Pilipczuk 1 Introduction to graph coloring Definition 1. Let G be a simple undirected graph and k a positive

More information

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:???

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:??? MIT 6.972 Algebraic techniques and semidefinite optimization May 9, 2006 Lecture 2 Lecturer: Pablo A. Parrilo Scribe:??? In this lecture we study techniques to exploit the symmetry that can be present

More information

REPRESENTATIONS AND CHARACTERS OF FINITE GROUPS

REPRESENTATIONS AND CHARACTERS OF FINITE GROUPS SUMMER PROJECT REPRESENTATIONS AND CHARACTERS OF FINITE GROUPS September 29, 2017 Miriam Norris School of Mathematics Contents 0.1 Introduction........................................ 2 0.2 Representations

More information

is an isomorphism, and V = U W. Proof. Let u 1,..., u m be a basis of U, and add linearly independent

is an isomorphism, and V = U W. Proof. Let u 1,..., u m be a basis of U, and add linearly independent Lecture 4. G-Modules PCMI Summer 2015 Undergraduate Lectures on Flag Varieties Lecture 4. The categories of G-modules, mostly for finite groups, and a recipe for finding every irreducible G-module of a

More information

Representation Theory

Representation Theory Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 Paper 1, Section II 19I 93 (a) Define the derived subgroup, G, of a finite group G. Show that if χ is a linear character

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS. Contents

IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS. Contents IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS NEEL PATEL Abstract. The goal of this paper is to study the irreducible representations of semisimple Lie algebras. We will begin by considering two

More information

Lecture 7: Passive Learning

Lecture 7: Passive Learning CS 880: Advanced Complexity Theory 2/8/2008 Lecture 7: Passive Learning Instructor: Dieter van Melkebeek Scribe: Tom Watson In the previous lectures, we studied harmonic analysis as a tool for analyzing

More information

Bounded Arithmetic, Expanders, and Monotone Propositional Proofs

Bounded Arithmetic, Expanders, and Monotone Propositional Proofs Bounded Arithmetic, Expanders, and Monotone Propositional Proofs joint work with Valentine Kabanets, Antonina Kolokolova & Michal Koucký Takeuti Symposium on Advances in Logic Kobe, Japan September 20,

More information

Compute the Fourier transform on the first register to get x {0,1} n x 0.

Compute the Fourier transform on the first register to get x {0,1} n x 0. CS 94 Recursive Fourier Sampling, Simon s Algorithm /5/009 Spring 009 Lecture 3 1 Review Recall that we can write any classical circuit x f(x) as a reversible circuit R f. We can view R f as a unitary

More information

YOUNG TABLEAUX AND THE REPRESENTATIONS OF THE SYMMETRIC GROUP

YOUNG TABLEAUX AND THE REPRESENTATIONS OF THE SYMMETRIC GROUP YOUNG TABLEAUX AND THE REPRESENTATIONS OF THE SYMMETRIC GROUP YUFEI ZHAO ABSTRACT We explore an intimate connection between Young tableaux and representations of the symmetric group We describe the construction

More information

An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees

An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees Francesc Rosselló 1, Gabriel Valiente 2 1 Department of Mathematics and Computer Science, Research Institute

More information

1 Adjacency matrix and eigenvalues

1 Adjacency matrix and eigenvalues CSC 5170: Theory of Computational Complexity Lecture 7 The Chinese University of Hong Kong 1 March 2010 Our objective of study today is the random walk algorithm for deciding if two vertices in an undirected

More information

(1) A frac = b : a, b A, b 0. We can define addition and multiplication of fractions as we normally would. a b + c d

(1) A frac = b : a, b A, b 0. We can define addition and multiplication of fractions as we normally would. a b + c d The Algebraic Method 0.1. Integral Domains. Emmy Noether and others quickly realized that the classical algebraic number theory of Dedekind could be abstracted completely. In particular, rings of integers

More information

Randomness and non-uniformity

Randomness and non-uniformity Randomness and non-uniformity JASS 2006 Course 1: Proofs and Computers Felix Weninger TU München April 2006 Outline Randomized computation 1 Randomized computation 2 Computation with advice Non-uniform

More information

Topics in linear algebra

Topics in linear algebra Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over

More information

Lecture 9: Counting Matchings

Lecture 9: Counting Matchings Counting and Sampling Fall 207 Lecture 9: Counting Matchings Lecturer: Shayan Oveis Gharan October 20 Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.

More information

FFTs in Graphics and Vision. Groups and Representations

FFTs in Graphics and Vision. Groups and Representations FFTs in Graphics and Vision Groups and Representations Outline Groups Representations Schur s Lemma Correlation Groups A group is a set of elements G with a binary operation (often denoted ) such that

More information

PSRGs via Random Walks on Graphs

PSRGs via Random Walks on Graphs Spectral Graph Theory Lecture 11 PSRGs via Random Walks on Graphs Daniel A. Spielman October 3, 2012 11.1 Overview There has been a lot of work on the design of Pseudo-Random Number Generators (PSRGs)

More information

CHAPTER 6. Representations of compact groups

CHAPTER 6. Representations of compact groups CHAPTER 6 Representations of compact groups Throughout this chapter, denotes a compact group. 6.1. Examples of compact groups A standard theorem in elementary analysis says that a subset of C m (m a positive

More information

Chapter 5 The Witness Reduction Technique

Chapter 5 The Witness Reduction Technique Outline Chapter 5 The Technique Luke Dalessandro Rahul Krishna December 6, 2006 Outline Part I: Background Material Part II: Chapter 5 Outline of Part I 1 Notes On Our NP Computation Model NP Machines

More information

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ). Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.

More information

6.842 Randomness and Computation March 3, Lecture 8

6.842 Randomness and Computation March 3, Lecture 8 6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n

More information

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations Page 1 Definitions Tuesday, May 8, 2018 12:23 AM Notations " " means "equals, by definition" the set of all real numbers the set of integers Denote a function from a set to a set by Denote the image of

More information

The Strong Largeur d Arborescence

The Strong Largeur d Arborescence The Strong Largeur d Arborescence Rik Steenkamp (5887321) November 12, 2013 Master Thesis Supervisor: prof.dr. Monique Laurent Local Supervisor: prof.dr. Alexander Schrijver KdV Institute for Mathematics

More information

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard

More information

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181. Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität

More information

Optimal Hitting Sets for Combinatorial Shapes

Optimal Hitting Sets for Combinatorial Shapes Optimal Hitting Sets for Combinatorial Shapes Aditya Bhaskara Devendra Desai Srikanth Srinivasan November 5, 2012 Abstract We consider the problem of constructing explicit Hitting sets for Combinatorial

More information

Representations of quivers

Representations of quivers Representations of quivers Gwyn Bellamy October 13, 215 1 Quivers Let k be a field. Recall that a k-algebra is a k-vector space A with a bilinear map A A A making A into a unital, associative ring. Notice

More information

Lecture 23: Alternation vs. Counting

Lecture 23: Alternation vs. Counting CS 710: Complexity Theory 4/13/010 Lecture 3: Alternation vs. Counting Instructor: Dieter van Melkebeek Scribe: Jeff Kinne & Mushfeq Khan We introduced counting complexity classes in the previous lecture

More information

Spectra of Semidirect Products of Cyclic Groups

Spectra of Semidirect Products of Cyclic Groups Spectra of Semidirect Products of Cyclic Groups Nathan Fox 1 University of Minnesota-Twin Cities Abstract The spectrum of a graph is the set of eigenvalues of its adjacency matrix A group, together with

More information

Real representations

Real representations Real representations 1 Definition of a real representation Definition 1.1. Let V R be a finite dimensional real vector space. A real representation of a group G is a homomorphism ρ VR : G Aut V R, where

More information

Testing Equality in Communication Graphs

Testing Equality in Communication Graphs Electronic Colloquium on Computational Complexity, Report No. 86 (2016) Testing Equality in Communication Graphs Noga Alon Klim Efremenko Benny Sudakov Abstract Let G = (V, E) be a connected undirected

More information

5 Irreducible representations

5 Irreducible representations Physics 129b Lecture 8 Caltech, 01/1/19 5 Irreducible representations 5.5 Regular representation and its decomposition into irreps To see that the inequality is saturated, we need to consider the so-called

More information

8. Prime Factorization and Primary Decompositions

8. Prime Factorization and Primary Decompositions 70 Andreas Gathmann 8. Prime Factorization and Primary Decompositions 13 When it comes to actual computations, Euclidean domains (or more generally principal ideal domains) are probably the nicest rings

More information

A PROOF OF BURNSIDE S p a q b THEOREM

A PROOF OF BURNSIDE S p a q b THEOREM A PROOF OF BURNSIDE S p a q b THEOREM OBOB Abstract. We prove that if p and q are prime, then any group of order p a q b is solvable. Throughout this note, denote by A the set of algebraic numbers. We

More information

U.C. Berkeley Better-than-Worst-Case Analysis Handout 3 Luca Trevisan May 24, 2018

U.C. Berkeley Better-than-Worst-Case Analysis Handout 3 Luca Trevisan May 24, 2018 U.C. Berkeley Better-than-Worst-Case Analysis Handout 3 Luca Trevisan May 24, 2018 Lecture 3 In which we show how to find a planted clique in a random graph. 1 Finding a Planted Clique We will analyze

More information

CS168: The Modern Algorithmic Toolbox Lecture #8: PCA and the Power Iteration Method

CS168: The Modern Algorithmic Toolbox Lecture #8: PCA and the Power Iteration Method CS168: The Modern Algorithmic Toolbox Lecture #8: PCA and the Power Iteration Method Tim Roughgarden & Gregory Valiant April 15, 015 This lecture began with an extended recap of Lecture 7. Recall that

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Lecture 6: Random Walks versus Independent Sampling

Lecture 6: Random Walks versus Independent Sampling Spectral Graph Theory and Applications WS 011/01 Lecture 6: Random Walks versus Independent Sampling Lecturer: Thomas Sauerwald & He Sun For many problems it is necessary to draw samples from some distribution

More information

Almost k-wise vs. k-wise independent permutations, and uniformity for general group actions

Almost k-wise vs. k-wise independent permutations, and uniformity for general group actions Almost k-wise vs. k-wise independent permutations, and uniformity for general group actions Noga Alon Tel-Aviv University and IAS, Princeton nogaa@tau.ac.il Shachar Lovett IAS, Princeton slovett@math.ias.edu

More information

Length-Increasing Reductions for PSPACE-Completeness

Length-Increasing Reductions for PSPACE-Completeness Length-Increasing Reductions for PSPACE-Completeness John M. Hitchcock 1 and A. Pavan 2 1 Department of Computer Science, University of Wyoming. jhitchco@cs.uwyo.edu 2 Department of Computer Science, Iowa

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

CMPUT 675: Approximation Algorithms Fall 2014

CMPUT 675: Approximation Algorithms Fall 2014 CMPUT 675: Approximation Algorithms Fall 204 Lecture 25 (Nov 3 & 5): Group Steiner Tree Lecturer: Zachary Friggstad Scribe: Zachary Friggstad 25. Group Steiner Tree In this problem, we are given a graph

More information

Supplemental for Spectral Algorithm For Latent Tree Graphical Models

Supplemental for Spectral Algorithm For Latent Tree Graphical Models Supplemental for Spectral Algorithm For Latent Tree Graphical Models Ankur P. Parikh, Le Song, Eric P. Xing The supplemental contains 3 main things. 1. The first is network plots of the latent variable

More information

CONSTRAINED PERCOLATION ON Z 2

CONSTRAINED PERCOLATION ON Z 2 CONSTRAINED PERCOLATION ON Z 2 ZHONGYANG LI Abstract. We study a constrained percolation process on Z 2, and prove the almost sure nonexistence of infinite clusters and contours for a large class of probability

More information

A note on monotone real circuits

A note on monotone real circuits A note on monotone real circuits Pavel Hrubeš and Pavel Pudlák March 14, 2017 Abstract We show that if a Boolean function f : {0, 1} n {0, 1} can be computed by a monotone real circuit of size s using

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

Graph isomorphism, the hidden subgroup problem and identifying quantum states

Graph isomorphism, the hidden subgroup problem and identifying quantum states 1 Graph isomorphism, the hidden subgroup problem and identifying quantum states Pranab Sen NEC Laboratories America, Princeton, NJ, U.S.A. Joint work with Sean Hallgren and Martin Rötteler. Quant-ph 0511148:

More information

The Banach-Tarski paradox

The Banach-Tarski paradox The Banach-Tarski paradox 1 Non-measurable sets In these notes I want to present a proof of the Banach-Tarski paradox, a consequence of the axiom of choice that shows us that a naive understanding of the

More information

GROUPS AS GRAPHS. W. B. Vasantha Kandasamy Florentin Smarandache

GROUPS AS GRAPHS. W. B. Vasantha Kandasamy Florentin Smarandache GROUPS AS GRAPHS W. B. Vasantha Kandasamy Florentin Smarandache 009 GROUPS AS GRAPHS W. B. Vasantha Kandasamy e-mail: vasanthakandasamy@gmail.com web: http://mat.iitm.ac.in/~wbv www.vasantha.in Florentin

More information

The matrix approach for abstract argumentation frameworks

The matrix approach for abstract argumentation frameworks The matrix approach for abstract argumentation frameworks Claudette CAYROL, Yuming XU IRIT Report RR- -2015-01- -FR February 2015 Abstract The matrices and the operation of dual interchange are introduced

More information

Notes on Complexity Theory Last updated: November, Lecture 10

Notes on Complexity Theory Last updated: November, Lecture 10 Notes on Complexity Theory Last updated: November, 2015 Lecture 10 Notes by Jonathan Katz, lightly edited by Dov Gordon. 1 Randomized Time Complexity 1.1 How Large is BPP? We know that P ZPP = RP corp

More information