Pseudorandomness for permutation and regular branching programs

Anindya De

March 6, 2013

Abstract

In this paper, we prove the following two results about the INW pseudorandom generator:

It fools constant width permutation branching programs with a seed of length O(log n · log(1/ɛ)).

It fools constant width regular branching programs with a seed of length O(log n · (log log n + log(1/ɛ))).

These results match the recent results of Koucký et al. (STOC 2011) and of Braverman et al. and Brody and Verbin (FOCS 2010). We improve the dependence of the seed on the width for permutation branching programs. Far more importantly, our work proceeds by analyzing the singular values of the stochastic matrices that arise in the transitions of the branching program, which we hope widens its ambit. As a corollary of our techniques, we present new results on the small-biased spaces for group products problem [MZ09]. We get a pseudorandom generator with seed length O(log n · (log |G| + log(1/ɛ))). Previously, using the result of Koucký et al., it was possible to get a seed length of O(log n · (|G|^O(1) + log(1/ɛ))) for this problem.

Keywords: Pseudorandom generators, permutation branching programs, expander products

Computer Science Division, University of California, Berkeley, CA, USA. anindya@cs.berkeley.edu.

1 Introduction

One of the most fundamental questions in complexity theory is whether one can save on computational resources like space and time by using randomness. While it is known that randomness is indispensable in settings like cryptography and distributed computation, a long line of research [Yao82, BM84, NW94, IW97] has shown that, assuming appropriate lower bounds on the circuit complexity of some functions, one can derandomize every randomized polynomial time algorithm, i.e., show P = BPP. Unfortunately, it has also been shown that any non-trivial derandomization of BPP implies circuit lower bounds [KI04] which seem out of reach of the present state of the art. Thus, getting unconditional derandomization of complexity classes like BPP and MA looks out of reach for current techniques. This has led to a shift of focus towards derandomization of low-level complexity classes, where one can hope to get unconditional results. One of the most important problems in this line of research is to derandomize bounded space computation. The ultimate aim of this line of research is to prove RL = L, i.e., to show that any computation that can be solved in randomized logspace can be simulated in deterministic logspace. Savitch [Sav70] showed that RL ⊆ NL ⊆ L^2, i.e., randomized logspace computation, and in fact non-deterministic logspace computation, can be simulated deterministically in O(log^2 n) space. Nisan [Nis92] also showed that RL ⊆ L^2 by constructing a pseudorandom generator (PRG) which can stretch a seed of length O(log^2 n) into n bits that fool logspace machines. In fact, Nisan's PRG fools read-once branching programs (which we define next) of polynomial length and width.

Definition 1.1 A read-once branching program (BP) of width w and length n is a directed multilayer graph with n + 1 layers such that each layer has w nodes, with edges going from the i-th layer to the (i+1)-th layer (0 ≤ i ≤ n − 1).
For every node (except those in the last layer), there are exactly two edges leaving that node, one marked 0 and the other marked 1. There is a designated start state in the first layer and a set of accepting states in the (n+1)-th layer.

Remark 1.2 In this paper, whenever we refer to branching programs, we mean read-once branching programs. We note that if the read-once restriction is not imposed, then in fact width-5 branching programs capture NC^1 [Bar89].

A BP is said to be a permutation branching program (PBP) if, for every layer, the transitions corresponding to 0 (resp. 1) form a matching. A BP is said to be a regular branching program (RBP) if the number of edges coming into every node is either 0 or 2. A given BP accepts an input x ∈ {0,1}^n if, starting from the start state and following the path specified by the input, it ends in one of the accepting states. It is not hard to see that randomized logspace computation is a uniform version of BPs with w = n^O(1).

Coming back to PRGs for branching programs, after [Nis92], several papers [INW94, RR99] improved on some parameters of the construction in [Nis92]. However, improving on the O(log^2 n) seed remained (and continues to remain) open in the following minimal sense: it is not known how to construct a PRG with seed length o(log^2 n) and constant error for width-3 BPs. Faced with this difficulty, research focused on solving special cases of this problem with better seed lengths ([Lu02], [LRTV09], [GMRZ10]). Attention has also been directed towards getting better seed lengths when there is some structural restriction on the BPs. In particular, when the branching program is regular, Braverman et al. [BRRY10] and Brody and Verbin [BV10] constructed pseudorandom generators with seed length O(log n · (log log n + log(1/ɛ))) which fool constant width

branching programs with error ɛ. The dependence of the seed length on the width was better in the latter paper, which obtained O(log n · (log w + log log n + log(1/ɛ))). Koucký, Nimbhorkar and Pudlák [KNP10] improved on this further for the case of permutation branching programs. In particular, for fooling constant width permutation branching programs with error ɛ, they got a seed length of O(log n · log(1/ɛ)). In their analysis, they transform this problem into the language of group products (which we discuss later) and then analyze their construction using basic properties of groups. In this vein, we should also mention that Šíma and Žák recently got a breakthrough by constructing a hitting set with seed length O(log n) for width-3 branching programs [vv10]. However, their techniques seem totally disjoint from the other works in this line of research.

1.1 Our results

We present a pseudorandom generator which ɛ-fools permutation branching programs of length n and width w using a seed of length O(log n · (w^8 + log(1/ɛ))). The PRG we use is the INW generator [INW94] (as in the previous works [BV10, BRRY10, KNP10]). We remark that Koucký et al. obtained a seed length of O(log n · (w! + log(1/ɛ))) for fooling permutation branching programs using the same generator. What we consider interesting is that our analysis is based on the spectra of the stochastic matrices that arise in the transitions of the branching program. Thus, we see it as a more combinatorial approach which might be helpful in other contexts as well. This is in contrast to the result of Koucký et al., which is based on a group-theoretic approach and is thus difficult to adapt to more combinatorial settings. Our techniques also show that the INW generator ɛ-fools regular branching programs of length n and width w using a seed of length O(log n · (w log log n + log(1/ɛ))). Our analysis is based on linear algebra, in contrast to the analysis of Braverman et al.
(which was based on information-theoretic ideas) and of Brody and Verbin (which was based on combinatorial ideas). We also consider the small-bias spaces problem for group products, first considered by Meka and Zuckerman in [MZ09]. Both the problem and our results on it are stated in Section 4. We next discuss the INW generator in detail and the main idea behind our improved analysis.

2 Technical overview

2.1 Impagliazzo-Nisan-Wigderson generator

The PRG used in this paper is the construction of Impagliazzo, Nisan and Wigderson [INW94] (hereafter referred to as the INW generator). We now describe their construction. First, let us recall the following important fact about the construction of expander graphs [RVW00].

Fact 2.1 For any n and λ > 0, there exist graphs on {0,1}^n with degree d = (1/λ)^Θ(1) and second eigenvalue λ such that, given any vertex x ∈ {0,1}^n and an edge label i ∈ [d], the i-th neighbor of x is computable in n^Θ(1) time.

The INW generator is defined recursively as follows. Let Γ_0 : {0,1}^t → {0,1} be simply the function which maps a t-bit string to its first bit. Assume Γ_{i-1} : {0,1}^m → {0,1}^l. Then Γ_i : {0,1}^{m + log d} → {0,1}^{2l} is defined as follows. Let x = y ∘ z ∈ {0,1}^{m + log d} be such that y is m bits long and z is log d

bits long. Let H be a graph on 2^m vertices constructed using Fact 2.1, and let y' be the z-th neighbor of y in H. Then Γ_i(x) = Γ_{i-1}(y) ∘ Γ_{i-1}(y'). Here and elsewhere, ∘ is used to denote concatenation. From the above, one can easily see that Γ_i : {0,1}^{t + i·log d} → {0,1}^{2^i}. As we can take t to be anything, we get that Γ_{log n} : {0,1}^{t + log n · log d} → {0,1}^n. As d = (1/λ)^Θ(1), the INW generator stretches a seed of length O(log n · log(1/λ)) to n bits.

Remark 2.2 It is possible, and will in fact be necessary for us (in Section 4), to define the INW generator so that it produces elements from a bigger alphabet. The construction in this case is as follows: we assume that we want to produce elements from some set G. For some t ≥ log |G|, let Γ_0 : {0,1}^t → G be simply the function which takes the first log |G| bits of its input and interprets them as an element of G. Assume Γ_{i-1} : {0,1}^m → G^l. Then Γ_i : {0,1}^{m + log d} → G^{2l} is defined as follows. Let x = y ∘ z ∈ {0,1}^{m + log d} be such that y is m bits long and z is log d bits long. Let H be a graph on 2^m vertices constructed using Fact 2.1, and let y' be the z-th neighbor of y in H. Then Γ_i(x) = Γ_{i-1}(y) ∘ Γ_{i-1}(y'). From the above, one can easily see that Γ_i : {0,1}^{t + i·log d} → G^{2^i}. As we can take t to be anything as long as it is at least log |G|, we get that Γ_{log n} : {0,1}^{log |G| + log n · log d} → G^n. As d = (1/λ)^Θ(1), the INW generator stretches a seed of length O(log |G| + log n · log(1/λ)) to n elements of G.

We now get back to the analysis of the INW generator. For the purposes of this discussion, we assume that the INW generator is producing bit strings as opposed to elements of G.

2.2 Analysis of the INW generator in terms of stochastic matrices

To understand the analysis of the INW generator from [INW94], as well as the improvements in this paper, it is helpful to look at branching programs from the following viewpoint. Assume that the branching program is of width w. Then the states in every layer can be numbered from 1 to w.
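Returning to the recursive construction of Γ_i above, it can be sketched as follows. This is our own illustration: the `neighbor` function below is a placeholder (XOR with a label-dependent mask) standing in for the explicit expander of Fact 2.1, not a genuine expander.

```python
# Sketch of the INW recursion. A real instantiation would use an explicit
# expander [RVW00]; `neighbor` here is only a placeholder.

D_LOG = 2  # log d: each level of the recursion consumes log d extra seed bits

def neighbor(y_bits, z_bits):
    """Placeholder for the z-th neighbor of vertex y in a 2^D_LOG-regular graph."""
    mask = (z_bits * ((len(y_bits) // len(z_bits)) + 1))[:len(y_bits)]
    return [a ^ b for a, b in zip(y_bits, mask)]

def inw(i, seed):
    """Gamma_i: stretches t + i*log d seed bits into 2^i output bits."""
    if i == 0:
        return seed[:1]                    # Gamma_0: first bit of the seed
    y, z = seed[:-D_LOG], seed[-D_LOG:]    # split off log d bits as the edge label
    return inw(i - 1, y) + inw(i - 1, neighbor(y, z))  # concatenation

out = inw(3, [1, 0, 1, 1] + [0, 1] * 3)   # seed length t + 3*log d = 4 + 6
print(len(out))  # 8 = 2^3 output bits
```

The doubling structure is the point of the sketch: each level reuses one seed for both halves, paying only log d fresh bits for the edge label.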
Also, for x, y ∈ [w] and b ∈ {0,1}, we introduce the notation (x, i, b) → (y, i+1) if there is an edge labelled b going from vertex x in layer i to vertex y in layer i+1. Now, for every layer 0 ≤ i < n, we can introduce two stochastic matrices M_{0i} and M_{1i} (we interchangeably call them walk matrices as well), defined as

M_{bi}(y, x) = 1 if (x, i, b) → (y, i+1), and 0 otherwise.

Now, assume that we start with a probability distribution x ∈ R^w over the states in the 0-th layer; if the string chosen is y, then the probability distribution on the states in the final layer is given by (Π_{i=0}^{n-1} M_{y_i i}) · x. Since any string is chosen with probability 1/2^n, the final distribution is given by

Σ_{y ∈ {0,1}^n} 2^{-n} (Π_{i=0}^{n-1} M_{y_i i}) · x = (Π_{i=0}^{n-1} (M_{0i} + M_{1i})/2) · x.

If instead the y's are drawn from a distribution D, then the distribution on the final layer will be

Σ_{y ∈ {0,1}^n} D(y) (Π_{i=0}^{n-1} M_{y_i i}) · x.
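As a small illustration of this viewpoint (our own toy example, not from the paper), the following sketch builds the walk matrices of a width-2 program computing the XOR of its two input bits and checks that, under a uniformly random input, the final distribution equals Π_i (M_{0i} + M_{1i})/2 applied to the start distribution:

```python
# Pure-Python w x w matrices (lists of rows), column convention M[y][x].

def mat_vec(M, v):
    return [sum(M[y][x] * v[x] for x in range(len(v))) for y in range(len(M))]

def walk_matrix(layer_succ):
    """M_b(y, x) = 1 iff state x moves to state y on this layer under bit b."""
    w = len(layer_succ)
    M = [[0] * w for _ in range(w)]
    for x, y in enumerate(layer_succ):
        M[y][x] = 1
    return M

def average(M0, M1):
    return [[(a + b) / 2 for a, b in zip(r0, r1)] for r0, r1 in zip(M0, M1)]

# Width-2 XOR program: bit 0 keeps the state, bit 1 swaps the two states.
succ = [([0, 1], [1, 0]), ([0, 1], [1, 0])]
v = [1.0, 0.0]  # start distribution: state 0 with probability 1
for s0, s1 in succ:
    v = mat_vec(average(walk_matrix(s0), walk_matrix(s1)), v)
print(v)  # [0.5, 0.5]: the XOR of two uniform bits is uniform
```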

Thus, our aim is to find a distribution D which can be sampled with few bits of randomness and for which

‖ Σ_{y ∈ {0,1}^n} D(y) Π_{i=0}^{n-1} M_{y_i i} − Π_{i=0}^{n-1} (M_{0i} + M_{1i})/2 ‖ ≤ ɛ.

In the above, we do not specify the norm we use, but actually any norm works for us (like the Frobenius norm), because for a constant-sized matrix all these norms are within a constant factor of each other. We now define and study the concept of the expander product of distributions of matrices.

2.3 Comparison of the product and the expander product of matrices

We first recall that the operator norm of a matrix M ∈ C^{n×n}, denoted ‖M‖_2, is defined as

‖M‖_2 = sup_{x ∈ C^n, x ≠ 0} ‖x·M‖ / ‖x‖.

An important property of the operator norm is that it is submultiplicative, i.e., ‖X·Y‖_2 ≤ ‖X‖_2 · ‖Y‖_2. Another important property, which shall be useful to us, is the following fact.

Fact 2.3 For any stochastic matrix M ∈ R^{n×n}, ‖M‖_2 ≤ 1.

Proof: Note that because M is a stochastic matrix, for every i, j ∈ [n] we have M_{ij} ≥ 0, and for every j ∈ [n], Σ_i M_{ij} = 1. Now, note that (x·M)_j = Σ_i x_i M_{ij}. Hence, for a unit vector x, we have

‖x·M‖_2^2 = Σ_{j ∈ [n]} (Σ_{i ∈ [n]} x_i M_{ij})^2 ≤ Σ_{j ∈ [n]} Σ_{i ∈ [n]} M_{ij} x_i^2 = Σ_{i ∈ [n]} x_i^2 = 1.

The inequality above is an application of the Cauchy-Schwarz inequality; the final equality uses that each row of M also sums to 1, as is the case for the doubly stochastic walk matrices arising from the branching programs considered in this paper.

Let us assume that Γ_1, Γ_2 : {0,1}^r → {0,1}^n and ρ_1, ρ_2 : {0,1}^n → C^{m×m}. Assume that ‖ρ_1(x)‖_2, ‖ρ_2(x)‖_2 ≤ 1 for all x ∈ {0,1}^n (this will be the case throughout this paper). Consider the following two sums:

A = Σ_{x ∈ {0,1}^r} 2^{-r} ρ_1(Γ_1(x)),   B = Σ_{x ∈ {0,1}^r} 2^{-r} ρ_2(Γ_2(x))   (1)

Then the product of A and B is given by

A·B = Σ_{x,y ∈ {0,1}^r} 2^{-2r} ρ_1(Γ_1(x)) · ρ_2(Γ_2(y)).

We now consider a 2^d-regular graph H on {0,1}^r with second eigenvalue bounded by λ. We define the expander product as

A ∘_H B = Σ_{x ∈ {0,1}^r, (x,y) ∈ E(H)} 2^{-(r+d)} ρ_1(Γ_1(x)) · ρ_2(Γ_2(y)).
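A quick numeric sanity check of Fact 2.3 (our own, using the row-vector convention (x·M)_j = Σ_i x_i M_{ij} and a doubly stochastic M, i.e., rows and columns both summing to 1, as holds for the walk matrices of permutation branching programs):

```python
import math
import random

M = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]  # doubly stochastic: every row and every column sums to 1

def row_vec_mul(x, M):
    n = len(x)
    return [sum(x[i] * M[i][j] for i in range(n)) for j in range(n)]

def norm(x):
    return math.sqrt(sum(t * t for t in x))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    assert norm(row_vec_mul(x, M)) <= norm(x) + 1e-12  # ||xM|| <= ||x||
print("operator norm at most 1 on 1000 random vectors")
```

Note the restriction matters: a merely column-stochastic matrix can have operator norm larger than 1.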

We note that without specifying the functions ρ_i, Γ_i, it is not possible to concretely define A ∘_H B. However, specifying all the parameters makes the definitions and applications cumbersome, so we sacrifice some accuracy for the sake of clarity; the ρ_i's and Γ_i's will be clear from the context. The relation between the above definitions on the one hand and the INW generator and branching programs on the other is as follows. Consider a branching program of length 2^{m+1}. Let Γ_m : {0,1}^t → {0,1}^{2^m} be the instantiation of the INW generator which stretches t bits to 2^m bits. Define ρ_1(x) = Π_{i < 2^m} M_{x_i i} and ρ_2(x) = Π_{i < 2^m} M_{x_i (i+2^m)}. Hence, A and B correspond to the walk matrices of the first and the second half of the branching program when the input to each half is sampled from the output of Γ_m. Now, if we use independent seeds for the first and the second applications of Γ_m, then the transition matrix for the entire branching program is A·B. On the other hand, we can apply the INW generator once more and define Γ_{m+1} : {0,1}^{t+d} → {0,1}^{2^{m+1}} as Γ_{m+1}(x, j) = Γ_m(x) ∘ Γ_m(H(x, j)), where H(x, j) denotes the j-th neighbor of x in H. In this case, it is easy to see that the transition matrix of the entire branching program, when the input is sampled from the output of Γ_{m+1}, is A ∘_H B.

Lemma 2.4 Let ρ_1, ρ_2 : {0,1}^m → C^{w×w} be such that for all x ∈ {0,1}^m and j ∈ {1,2}, ‖ρ_j(x)‖_2 ≤ 1. Let Γ_1, Γ_2 : {0,1}^t → {0,1}^m, and let H be a 2^d-regular graph on {0,1}^t with second eigenvalue λ. Then, for A and B as defined in (1),

‖A·B − A ∘_H B‖_2 ≤ λ.

Also, we have the following observation: if for all x, ρ_1(x) and ρ_2(x) are identity on a subspace W, then both A·B and A ∘_H B are also identity on the subspace W. By a matrix M being identity on a subspace W, we mean that M maps W to W and W^⊥ to W^⊥, and for every x ∈ W, x·M = x (here W^⊥ is the orthogonal complement of W).

Proof: Let us define X, Y ∈ C^{w × w2^t} as follows.
Both X and Y are divided into 2^t blocks such that the i-th block of X is ρ_1(Γ_1(i)) and the i-th block of Y is ρ_2(Γ_2(i)). Also, let us define matrices Λ_1, Λ_2 ∈ C^{w2^t × w2^t} as Λ_1 = Λ_H ⊗ Id and Λ_2 = Λ_K ⊗ Id, where Id is the w×w identity matrix, Λ_H is the random walk matrix of the graph H, and Λ_K is the random walk matrix of the clique (with self-loops) on 2^t vertices. Here ⊗ denotes the tensor product of matrices. Then,

A ∘_H B − A·B = (1/2^t) · X · (Λ_1 − Λ_2) · Y^T.

However, by the definition of the second eigenvalue of Λ_H (and since the graph H is regular), the largest singular value of C = Λ_H − Λ_K is bounded by λ. Since the largest singular value of a tensor product of two matrices is the product of the largest singular values of the two matrices, the largest singular value of Λ_1 − Λ_2 = C ⊗ Id is at most λ. This implies that ‖Λ_1 − Λ_2‖_2 ≤ λ. Now, observe that

‖X (Λ_1 − Λ_2) Y^T‖_2 / 2^t = ‖(X/√(2^t)) · (C ⊗ Id) · (Y^T/√(2^t))‖_2.

Also, as each block of X is a matrix whose norm is at most 1, we have ‖X/√(2^t)‖_2 ≤ 1; similarly, ‖Y^T/√(2^t)‖_2 ≤ 1. Putting everything together proves the claim. The observation trivially follows from the assumptions about H and the ρ_i's.
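To make the two products concrete, here is a small sketch (our own toy example) computing A·B and A ∘_H B for 2×2 matrix families, with H a 4-cycle on {0,1}^2. The 4-cycle is a stand-in used only to illustrate the definitions; it is bipartite, so its second eigenvalue is 1 in absolute value and Lemma 2.4 gives no nontrivial guarantee for it.

```python
from itertools import product

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_avg(mats):
    return [[sum(M[i][j] for M in mats) / len(mats) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
S = [[0, 1], [1, 0]]  # swap matrix

def rho(x):               # rho_1 = rho_2: product of per-bit walk matrices
    M = I
    for b in x:
        M = mat_mul(M, S if b else I)
    return M

V = list(product([0, 1], repeat=2))            # vertices {0,1}^2
H = {v: [(v[0] ^ 1, v[1]), (v[0], v[1] ^ 1)]   # 4-cycle: flip exactly one bit
     for v in V}

A = mat_avg([rho(x) for x in V])
B = mat_avg([rho(y) for y in V])
true_prod = mat_mul(A, B)                      # independent seeds: A * B
exp_prod = mat_avg([mat_mul(rho(x), rho(y))    # correlated seeds: A o_H B
                    for x in V for y in H[x]])
print(true_prod, exp_prod)
```

Here the two products in fact differ maximally (the true product is the all-1/2 matrix, while the expander product is the swap matrix), which illustrates why Lemma 2.4 needs H to have small second eigenvalue λ.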

2.4 Basic error analysis of the INW generator

We would now like to put an upper bound on the probability with which the branching program can distinguish between the output of the INW generator and the uniform distribution. Equivalently, let M be the average walk matrix between the 0-th and the n-th layer when the input is uniformly random, and let M' be the average walk matrix when the input is chosen from the output of the INW generator. We would like to put an upper bound on ‖M − M'‖_2. In order to do this, we will consider two trees, both full binary trees whose leaf nodes are numbered 1 to n from left to right. The first tree represents the average walk matrices (at various points in the branching program) when the input is sampled from the output of the INW generator; the second tree represents the average walk matrices when the input is uniform. We call the first tree the pseudo tree and the second one the true tree. Without loss of generality, we take n to be a power of 2. Consider any node x (in either of the trees) at height m, and assume that the leaves in the subtree rooted at x are numbered {i, ..., j = i + 2^m − 1}. Also, let Γ_m : {0,1}^t → {0,1}^{2^m} be the INW generator which stretches t bits into 2^m bits, and let Γ'_m : {0,1}^{2^m} → {0,1}^{2^m} be the identity function. Further, we define ρ_x(w) = Π_{i ≤ t ≤ j} M_{w_t t}. The labeling L(x) is then as follows:

L(x) = Σ_{w ∈ {0,1}^{2^m}} 2^{-2^m} ρ_x(Γ'_m(w)) if x is in the true tree,
L(x) = Σ_{w ∈ {0,1}^t} 2^{-t} ρ_x(Γ_m(w)) if x is in the pseudo tree.

Thus, with the above labeling, L(x) is simply the average walk matrix to go from layer i to layer i + 2^m when the input is sampled from the uniform distribution (in the case of the true tree) or from the output of the INW generator (in the case of the pseudo tree). We adopt the following convention: whenever we talk about a node x in the tree, it refers to the corresponding nodes in both the true and the pseudo tree.
To refer to the corresponding node in the true tree, we refer to it as x_t, and for the pseudo tree, we refer to it as x_p. We now observe the following: let x be a node and let y and z be its left and right children. Then

L(x_t) = L(y_t) · L(z_t),   L(x_p) = L(y_p) ∘_H L(z_p).

Claim 2.5 Let x be a node at height t. Then ‖L(x_t) − L(x_p)‖_2 ≤ (2^t − 1)λ.

Proof: Clearly, the claim holds when t = 0. We assume it holds for t ≤ t_0 and prove it for t = t_0 + 1. Let x be at height t_0 + 1 and let its children y and z be at height t_0. Then

‖L(x_p) − L(x_t)‖_2 = ‖L(x_p) − L(y_p)L(z_p) + L(y_p)L(z_p) − L(y_t)L(z_t)‖_2
≤ ‖L(y_p) ∘_H L(z_p) − L(y_p)L(z_p)‖_2 + ‖L(y_p)‖_2 · ‖L(z_p) − L(z_t)‖_2 + ‖L(z_t)‖_2 · ‖L(y_p) − L(y_t)‖_2
≤ λ + 2(2^{t_0} − 1)λ < 2^{t_0+1} λ.

(For a bit string w of length n and a position t ∈ [n], w_t denotes the t-th bit of w.)
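The recursion behind this proof can be checked numerically: one level of the tree adds an error of λ (from Lemma 2.4) plus the two child errors, and iterating e(t+1) = λ + 2·e(t) from e(0) = 0 gives the closed form (2^t − 1)·λ.

```python
# Numerical check of the error recursion from Claim 2.5.

def error_bound(height, lam):
    e = 0.0
    for _ in range(height):
        e = lam + 2 * e  # one level: lambda from Lemma 2.4 + both child errors
    return e

lam = 1e-6
for t in range(1, 11):
    assert abs(error_bound(t, lam) - (2 ** t - 1) * lam) < 1e-12
print(error_bound(10, lam))  # approximately (2**10 - 1) * lam
```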

In the above analysis, we use Fact 2.3. The above claim clearly shows that to have a total error of ɛ, it suffices to have λ = ɛ/n, which means that the INW generator has a seed of length O(log n · log(n/ɛ)). The more important aspect of the above analysis is that while we pessimistically assume that ‖L(y_p)‖_2 and ‖L(z_t)‖_2 are as large as 1, in general they can be much smaller. In particular, if they are both bounded by, say, 1/3, then the error will not increase with the height. Of course, in general this is not true, and it can be the case that ‖L(y_p)‖_2 = ‖L(z_t)‖_2 = 1; but then we show that in fact there is no error incurred when one takes the expander product instead of the true product (this is not exactly true, but we make it precise later). It is interesting to note that the results in [BRRY10, BV10, KNP10], as well as our result, beat the above naive analysis for regular (or permutation) branching programs by a cleverer analysis of the INW generator. In contrast, the results in [Lu02, LRTV09, GMRZ10], which are directed towards symmetric functions, use a combination of hash functions and the INW generator. It seems hard to use hash functions for general branching programs, because the main purpose of hashing in these constructions is to rearrange weights so that they are evenly spread out. Unless one is guaranteed that the function being computed by the branching program is invariant under permutations, it seems impossible to use hash functions.

2.5 Organization of the paper

Section 3 considers the problem of fooling group products over an abelian group. While technically much simpler than the subsequent sections on fooling permutation branching programs, the analysis gives the intuition for how to improve the analysis of the INW generator for general permutation branching programs. Also, the seed length we achieve for this problem is incomparable to the previous best known seed length for the same problem. Section 4 considers the problem of small-biased spaces for group products.
We improve on the previous best result for this problem [MZ09]. Section 5 presents a PRG with seed length O(log n · (w^8 + log(1/ɛ))) for permutation branching programs of width w and length n. Section 6 presents a PRG with seed length O(log n · (w log log n + log(1/ɛ))) for regular branching programs of width w and length n.

We would like to highlight that while the results in the following section about fooling abelian group products are particularly easy to prove, they are nevertheless important for two reasons. First, they highlight some of the important ideas which will later be used to analyze general permutation branching programs. Second, the complexity of the analysis in [KNP10] seems to stem from the fact (as they themselves remark) that a group may possess non-trivial subgroups. Our analysis shows that, in fact, most of the complexity in analyzing general permutation branching programs comes from the non-commutativity of the group rather than from the existence of proper subgroups.

3 Fooling abelian group products using the INW generator

Assume that we have been given an abelian group G and g_0, ..., g_{n-1} ∈ G. Further, for a, b ∈ G, a·b represents the group operation applied to a and b. Let us also define

g^x = 1 if x = 0, and g^x = g if x = 1.

Consider the distribution over the group G obtained by sampling x_0, ..., x_{n-1} uniformly at random and taking the product g_0^{x_0} · g_1^{x_1} · ... · g_{n-1}^{x_{n-1}}. The following is the main theorem of this section.

Theorem 3.1 Let Γ : {0,1}^t → {0,1}^n be the INW generator with λ = ɛ/|G|^7. Consider the distributions

D = {g_0^{x_0} · g_1^{x_1} · ... · g_{n-1}^{x_{n-1}} : (x_0, ..., x_{n-1}) ← Γ(U_t)},
D' = {g_0^{x_0} · g_1^{x_1} · ... · g_{n-1}^{x_{n-1}} : (x_0, ..., x_{n-1}) ← U_n}.

Then D and D' are ɛ-close in statistical distance.

As the seed length required for the INW generator is O(log n · log(1/λ)), we get the following corollary.

Corollary 3.2 There exists a polynomial time computable function Γ : {0,1}^t → {0,1}^n, where t = O(log n · (log m + log(1/ɛ))), such that for every abelian group G of size m and g_0, ..., g_{n-1} ∈ G, the distributions D and D' defined as above are ɛ-close in statistical distance.

In order to prove Theorem 3.1, we first need to review some basic Fourier analysis.

Definition 3.3 A character χ : G → C is a group homomorphism, i.e., for x, y ∈ G, χ(x·y) = χ(x)χ(y). Any abelian group G has |G| distinct characters, including the trivial character which maps every element to 1.

Definition 3.4 For a distribution D : G → [0,1] and a character χ : G → C, we define D̂(χ) = Σ_{x ∈ G} χ(x)D(x). Note that this differs from the standard definition of a Fourier coefficient of a function by a normalization factor.

For any element g ∈ G, consider the matrix R_g, defined as follows:

R_g(x, y) = 1 if x·y^{-1} = g, and 0 otherwise.

First of all, we observe that all the matrices of the form R_g commute with each other, because the underlying group is commutative. This implies that they are simultaneously diagonalizable in some basis. In fact,

R_g = Γ · diag[χ_1(g), ..., χ_{|G|}(g)] · Γ^{-1},

where Γ is a unitary matrix and χ_1, ..., χ_{|G|} are the distinct characters. Now, note that we can define the group products problem as a branching program. More precisely, we have |G| states at every level, corresponding to the group elements. From level i to level i+1, if the input is 0, then every state g maps to g.
If the input is 1, then every state g maps to g·g_i. Therefore, the walk matrices are M_{0i} = Id (the |G| × |G| identity matrix) and M_{1i} = R_{g_i}. We now make the following observation.

Observation 3.5 Let x_p be the root node of the pseudo tree and x_t the root node of the true tree, with the parameters of the pseudo tree the same as in Theorem 3.1 and with the walk matrices at the i-th step being M_{0i} and M_{1i} as defined above. Let L(x_p) and L(x_t) be the labels of x_p and x_t respectively. If ‖L(x_p) − L(x_t)‖_2 ≤ ɛ/|G|, then Theorem 3.1 follows.
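Before turning to the proofs, here is a toy numerical check of this diagonalization picture (our own example, for the cyclic group Z_m): the characters χ_k(g) = e^{2πikg/m} simultaneously diagonalize the walk matrices, so the Fourier coefficients of the product distribution factor across layers as Π_t (1 + χ_k(g_t))/2, and each nontrivial factor obeys the magnitude bound 1 − 1/|G|^2 that appears in Claim 3.7 below.

```python
import cmath
from itertools import product

m = 5
gs = [1, 2, 4, 3]  # fixed g_0, ..., g_{n-1} in Z_5 (group operation: + mod 5)

def chi(k, g):
    return cmath.exp(2j * cmath.pi * k * g / m)

# Brute-force distribution of g_0^{x_0} * ... * g_{n-1}^{x_{n-1}}.
D = [0.0] * m
for xs in product([0, 1], repeat=len(gs)):
    D[sum(g for g, x in zip(gs, xs) if x) % m] += 2.0 ** -len(gs)

for k in range(m):
    lhs = sum(chi(k, g) * D[g] for g in range(m))   # D_hat(chi_k)
    rhs = 1.0
    for g in gs:
        rhs *= (1 + chi(k, g)) / 2                  # per-layer diagonal entry
    assert abs(lhs - rhs) < 1e-9

# Nontrivial per-layer entries satisfy |(1 + chi_k(g))/2| <= 1 - 1/m^2.
for k in range(1, m):
    for g in range(1, m):
        assert abs((1 + chi(k, g)) / 2) <= 1 - 1 / m ** 2 + 1e-12
print("Z_5: Fourier coefficients factor and obey the 1 - 1/|G|^2 bound")
```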

Proof: Note that the distribution obtained in the case of the pseudo tree is given by e_1 · L(x_p), where e_1 is the standard unit vector with 1 at the position of the group identity and 0 everywhere else. Similarly, the distribution in the case of the true tree is e_1 · L(x_t). The statistical distance between the two distributions is given by

‖e_1 · (L(x_p) − L(x_t))‖_1 ≤ √|G| · ‖e_1 · (L(x_p) − L(x_t))‖_2 ≤ √|G| · ‖L(x_p) − L(x_t)‖_2,

which proves our claim.

In order to prove that ‖L(x_p) − L(x_t)‖_2 is small, we make the following very important observation.

Observation 3.6 Corresponding to the walk matrix M_{bi}, let the diagonalized matrix be ρ_{bi}. Since all the matrices are simultaneously diagonalizable, we can assume that the walk matrices at the leaf nodes of the pseudo and the true trees are ρ_{bi} instead of M_{bi}. The labeling for the non-leaf nodes is generated in the same way as before: in the true tree, the label of a non-leaf node is the product of the labels of its two children, while in the pseudo tree, the label of a non-leaf node is the expander product of the labels of its two children (the expander being the underlying expander of the INW generator). Since all the matrices at the leaf nodes are diagonal, each of the intermediate products is also diagonal (in both trees); therefore, to bound ‖L(x_p) − L(x_t)‖_2, it suffices to put an upper bound on every diagonal entry of L(x_p) − L(x_t).

From the above observation, it suffices to show that for any i ∈ [|G|], |L(x_p)[i] − L(x_t)[i]| ≤ ɛ/|G|. Here, for a matrix A, A[i] represents its i-th diagonal entry. Let us fix a particular i ∈ [|G|].

Claim 3.7 Consider any node x in the true tree and let L(x) be its label. Consider the i-th diagonal entry of L(x). Then either this entry is 1, or it is at most 1 − 1/|G|^2 in magnitude. Further, it is 1 if and only if the corresponding diagonal entry is 1 for each of the (diagonalized) walk matrices at all the leaf nodes.
Proof: We first prove the claim for the leaf nodes. Note that the diagonal entries correspond to the characters; say the i-th diagonal entry corresponds to the character χ_i. Then the i-th diagonal entry of the t-th leaf is (1/2)(χ_i(e) + χ_i(g_t)) = (1/2)(1 + χ_i(g_t)). Because a character is a homomorphism, χ_i(g_t)^{|G|} = 1. Hence either χ_i(g_t) = 1, or χ_i(g_t) = ω = e^{2πia/|G|} with 0 < a < |G|, in which case

|(1 + ω)/2| = |cos(πa/|G|)| ≤ cos(π/|G|) ≤ 1 − 1/|G|^2.

This gives us the result for the leaf nodes. Now, assume the hypothesis to be true for nodes at height h < t_0, and consider a node x_t at height t_0 with children y_t and z_t. For non-leaf nodes, we observe that the i-th diagonal entry of L(x_t) is the product of the corresponding entries of L(y_t) and L(z_t). By the induction hypothesis, unless both of these entries are 1 in magnitude, at least one of them is at most 1 − 1/|G|^2 in magnitude, and hence so is the entry of L(x_t). Also, if the entry of L(x_t) is 1, then so are the entries of L(y_t) and L(z_t), and by the induction hypothesis all the leaves in the subtrees rooted at y_t and z_t have walk matrices whose i-th diagonal entry is 1. This proves the claim for x_t as well.

For the rest of the discussion, we fix i and let L'(x) denote L(x)[i] for any node x. The next claim shows that for any node x_t in the true tree, if the i-th diagonal entry is at least 1/10 in magnitude,

then the corresponding entry in x_p is within an error of roughly λ|G|^4 of it. All the calculations in this section use that λ|G|^6 < 1/10 (eventually we set λ = ɛ/|G|^7). More precisely, we have the following claim.

Claim 3.8 Let x_t be a node in the true tree such that its label L'(x_t) = l_i satisfies |l_i| ≥ 1/10. Then |L'(x_t) − L'(x_p)| ≤ λ|G|^4 · log(1/|l_i|).

Proof: We prove it by induction on the height of the node x_t. Note that it is trivially true for the leaves, as the marginal of the INW generator on any coordinate is uniformly random. Let us assume it is true for nodes at height < h, and let x be a node at height h with children y and z. We distinguish two situations.

First, assume that at least one of |L'(y_t)| or |L'(z_t)| is 1; without loss of generality, say it is the former. Then L'(x_t) = L'(z_t), and also L'(x_p) = L'(z_p) (we do not incur an error of λ here, because of the observation in Lemma 2.4). By the induction hypothesis, |L'(z_p) − L'(z_t)| ≤ λ|G|^4 · log(1/|L'(z_t)|). Hence

|L'(x_p) − L'(x_t)| = |L'(z_p) − L'(z_t)| ≤ λ|G|^4 · log(1/|L'(z_t)|) = λ|G|^4 · log(1/|l_i|).

Next, we consider the case when both |L'(y_t)| ≤ 1 − 1/|G|^2 and |L'(z_t)| ≤ 1 − 1/|G|^2. Let

L'(y_p) = L'(y_t) + ɛ_y,   L'(z_p) = L'(z_t) + ɛ_z.

Let us assume without loss of generality that |L'(y_t)| ≥ |L'(z_t)|. Writing ɛ_λ for the error introduced by the expander product, with |ɛ_λ| ≤ λ by Lemma 2.4, we have

|L'(x_p) − L'(x_t)| = |ɛ_y · L'(z_t) + ɛ_z · (L'(y_t) + ɛ_y) + ɛ_λ|
≤ |ɛ_z| + |ɛ_y| · |L'(z_t)| + λ
≤ λ|G|^4 · log(1/|L'(z_t)|) + (1 − 1/|G|^2) · λ|G|^4 · log(1/|L'(y_t)|) + λ
≤ λ|G|^4 · log(1/(|L'(z_t)| · |L'(y_t)|))
= λ|G|^4 · log(1/|L'(x_t)|).

The last inequality uses the fact that |L'(y_t)| ≤ 1 − 1/|G|^2.

We next show that for any node x_t in the true tree, if the i-th diagonal entry is at most 1/10 in magnitude, then the corresponding entry in x_p is within an error of λ|G|^6 of it.

Claim 3.9 Let x_t be a node in the true tree such that L'(x_t) = l_i satisfies |l_i| < 1/10. Then |L'(x_t) − L'(x_p)| ≤ λ|G|^6.

Proof: Let the children of x be y and z.
Let us assume by the induction hypothesis that the claim holds for all nodes below x. We consider the following four cases:

At least one of |L'(y_t)| = 1 or |L'(z_t)| = 1; assume |L'(y_t)| = 1.

Both |L'(y_t)| ≥ 1/10 and |L'(z_t)| ≥ 1/10.

Both |L'(y_t)| < 1/10 and |L'(z_t)| < 1/10.

Exactly one of |L'(y_t)| and |L'(z_t)| is less than 1/10.

In the first case, note that by the induction hypothesis the claim holds for y and z. Now, as one of the entries is 1, by Lemma 2.4 we see that

L'(x_p) = L'(y_p) ∘_H L'(z_p) = L'(y_p) · L'(z_p) = L'(z_p).

As |L'(z_p) − L'(z_t)| ≤ λ|G|^6, we get |L'(x_p) − L'(x_t)| ≤ λ|G|^6.

In the second case, combining the errors from both children via Claim 3.8, we get that the error is at most 2(log 10)·λ|G|^4 + λ ≤ λ|G|^6.

For the next two cases, let us write L'(z_p) = L'(z_t) + ɛ_z and L'(y_p) = L'(y_t) + ɛ_y, and let ɛ_λ (with |ɛ_λ| ≤ λ) denote the error from the expander product. In the third case, |ɛ_y|, |ɛ_z| ≤ λ|G|^6, and

|L'(x_p) − L'(x_t)| = |ɛ_y · L'(z_t) + ɛ_z · (L'(y_t) + ɛ_y) + ɛ_λ|
≤ |ɛ_y| · |L'(z_t)| + |ɛ_z| · |L'(y_t)| + |ɛ_z ɛ_y| + λ
≤ λ|G|^6/10 + λ|G|^6/10 + λ^2|G|^{12} + λ ≤ λ|G|^6.

For the last case, assume that |L'(y_t)| < 1/10 and |L'(z_t)| ≥ 1/10. Hence, for this part, we have |ɛ_y| ≤ λ|G|^6 and |ɛ_z| ≤ (log 10)·λ|G|^4. Again, we have

|L'(x_p) − L'(x_t)| ≤ |ɛ_y| · |L'(z_t)| + |ɛ_z| · |L'(y_t)| + |ɛ_z ɛ_y| + λ
≤ (1 − 1/|G|^2) · λ|G|^6 + (log 10) · λ|G|^4/10 + (log 10) · λ^2|G|^{10} + λ ≤ λ|G|^6.

Combining Claims 3.8 and 3.9, we get the following lemma, which together with Observation 3.6 implies Theorem 3.1.

Lemma 3.10 Let x_t be a node in the true tree and x_p the corresponding node in the pseudo tree. Then |L'(x_t) − L'(x_p)| ≤ λ|G|^6. This implies that ‖L(x_t) − L(x_p)‖_2 ≤ λ|G|^6 ≤ ɛ/|G| for λ = ɛ/|G|^7.

4 Small-biased spaces for group products

Next, we introduce the problem of small-biased spaces for group products. Let G be an arbitrary group and let x_1, ..., x_n ∈ {0,1}. Again, for a, b ∈ G, we let a·b denote the group operation applied to a and b. We also remind ourselves that for g ∈ G and x ∈ {0,1}, g^x = 1 if x = 0, and g^x = g if x = 1. Consider the distribution D = g_1^{x_1} · ... · g_n^{x_n}, where g_1, ..., g_n ∈ G are chosen independently and uniformly at random. We seek to come up with an efficiently computable function Γ : {0,1}^t → G^n such that when (g_1, ..., g_n) is sampled from Γ(U_t), the distribution D' = g_1^{x_1} · ... · g_n^{x_n}

is ɛ-close to D in statistical distance. The aim is to keep t as small as possible, and a probabilistic argument shows that it is possible to get t = O(log |G| + log n + log(1/ɛ)). The task of getting an explicit function Γ was first considered by Meka and Zuckerman [MZ09]. They obtained the following result.

Theorem 4.1 There exist a fixed constant c = c(G) < 1/|G| and Γ : {0, 1}^t → G^n with t = O(log n) such that for any h ∈ G,

| Pr_{(g_1,...,g_n) ← Γ(U_t)} [g_1^{x_1} · ... · g_n^{x_n} = h] − Pr_{(g_1,...,g_n) ← G^n} [g_1^{x_1} · ... · g_n^{x_n} = h] | ≤ c.

Also, one of their theorems, coupled with the pseudorandom generator from [KNP10], gives the following result: for every ɛ > 0, there exists Γ : {0, 1}^t → G^n with t = O(log n · (log(1/ɛ) + |G|^{Θ(1)})) such that

| Pr_{(g_1,...,g_n) ← Γ(U_t)} [g_1^{x_1} · ... · g_n^{x_n} = h] − Pr_{(g_1,...,g_n) ← G^n} [g_1^{x_1} · ... · g_n^{x_n} = h] | ≤ ɛ.

We improve on their result in terms of the dependence of the seed length on the size of the group. Also, as we shall see, to get this improvement we do not require the whole machinery of [KNP10], and our proof is rather short and simple. In particular, we prove the following theorem.

Theorem 4.2 Let Γ : {0, 1}^t → G^n denote the INW generator with λ = ɛ/(2|G|). If X denotes the output distribution of Γ(U_t), then for any h ∈ G,

| Pr_{(g_1,...,g_n) ← X} [g_1^{x_1} · ... · g_n^{x_n} = h] − Pr_{(g_1,...,g_n) ← G^n} [g_1^{x_1} · ... · g_n^{x_n} = h] | ≤ ɛ.

As a corollary, using the INW generator described in Remark 2.2, we get that there is a polynomial time computable function Γ : {0, 1}^t → G^n with t = O(log n · (log |G| + log(1/ɛ))) such that if X denotes the output distribution of Γ(U_t), the same bound holds.

Before we discuss the proof of the above theorem, we will need to review some basic representation theory.
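As a toy illustration of the target distribution D (an illustrative brute-force check, not from the text, using the cyclic group Z_m with addition mod m as the group operation): whenever at least one exponent bit x_i is 1, the product of uniform group elements is itself exactly uniform over the group.

```python
from itertools import product

# Brute-force the distribution D = g1^{x1} * ... * gn^{xn} over uniform g_i,
# for the cyclic group Z_m (operation: addition mod m; g^0 is the identity 0).
def true_distribution(m, xs):
    counts = {h: 0 for h in range(m)}
    for gs in product(range(m), repeat=len(xs)):
        h = 0
        for g, x in zip(gs, xs):
            if x == 1:
                h = (h + g) % m  # multiply in g only when the bit is set
        counts[h] += 1
    total = m ** len(xs)
    return {h: c / total for h, c in counts.items()}

# With at least one bit set, D is exactly uniform over the group.
dist = true_distribution(3, [1, 0, 1])
assert all(abs(p - 1 / 3) < 1e-12 for p in dist.values())
```

An explicit Γ as in Theorem 4.2 must ɛ-approximate this behavior while drawing (g_1, ..., g_n) from far fewer random bits than the n·log|G| used here.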
While it is possible to present the entire proof without using the language of representations, we believe it is the right way to look at the proof and might be useful in getting improvements in the future. It will also be helpful for us in Section 5. An excellent source for the required material is the lecture notes of Teleman [Tel05]. For the reader familiar with representation theory, we remark that our definitions are sometimes restrictive, because the most general definition is not always helpful for us.

Definition 4.3 Let V be any vector space over C. Then GL(V) is the group of all invertible linear transformations from V to V, with the group operation being function composition.

Definition 4.4 For a group G and a vector space V, a map ρ : G → GL(V) is said to be a representation of G if ρ is a group homomorphism. In this paper, we only consider vector spaces V over C.
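For concreteness (an illustrative example, not from the text): for the cyclic group Z_3, each character χ_k(x) = ω^{kx} with ω = e^{2πi/3} is a one-dimensional representation, i.e. a group homomorphism into GL(C) = C \ {0}.

```python
import cmath

m = 3
omega = cmath.exp(2j * cmath.pi / m)

def chi(k, x):
    # The k-th character of Z_m: a 1-dimensional representation.
    return omega ** (k * x)

# Homomorphism property: chi_k(g) * chi_k(h) = chi_k(g + h mod m).
for k in range(m):
    for g in range(m):
        for h in range(m):
            assert abs(chi(k, g) * chi(k, h) - chi(k, (g + h) % m)) < 1e-12
```

These three characters are in fact all the irreducible representations of Z_3, so the squared dimensions sum to 1 + 1 + 1 = 3 = |G|, consistent with Lemma 4.7.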

Definition 4.5 A group representation ρ : G → GL(V) is said to be irreducible if it has no non-trivial invariant subspace. In other words, if W ⊆ V satisfies ρ(g)W ⊆ W for every g ∈ G, then W = {0} or W = V.

Irreducible representations are fundamental building blocks in the sense that any representation of the group G can be seen as a direct sum of irreducible representations of G. Moreover, any finite group G has only finitely many irreducible representations (up to isomorphism). We say two representations ρ_1 and ρ_2 are isomorphic if there exists an invertible matrix τ such that for every g, τ · ρ_1(g) · τ^{−1} = ρ_2(g).

Theorem 4.6 (Maschke) Let V be any finite dimensional vector space over C, G be a finite group and ρ : G → GL(V) be a representation. Then ρ can be written as a direct sum of irreducible representations of the group G.

We list the following important properties of irreducible representations (some of which will be helpful later).

Lemma 4.7 Let S = {ρ_1, ρ_2, ...} be the set of irreducible representations of a finite group G. Let d_i be the dimension of ρ_i, i.e. ρ_i : G → C^{d_i × d_i}. Then Σ_{i ∈ S} d_i^2 = |G|.

We next state Schur's lemma.

Lemma 4.8 Let ρ and τ be two non-isomorphic irreducible representations of a group G. Then

⟨τ_{i,j}, ρ_{k,l}⟩ = E_g[τ_{i,j}(g) · \overline{ρ_{k,l}(g)}] = 0   and   ⟨τ_{i,j}, τ_{k,l}⟩ = δ_{i,k} δ_{j,l} / d_τ,

where d_τ is the dimension of τ.

We remark that every group has a trivial irreducible representation ρ_t : G → GL(C) such that ρ_t(x) = 1 for all x. We now state the following simple corollary of Schur's lemma and the earlier observation.

Corollary 4.9 Let ρ : G → GL(V) be a non-trivial irreducible representation of G. Then Σ_{x ∈ G} ρ(x) = 0.

Proof: By Schur's lemma, if τ is the trivial representation, then for any k, l, ⟨τ_{1,1}, ρ_{k,l}⟩ = 0, which implies the claim.

We now return to the problem of constructing small bias spaces over group products. We use the INW generator described in Remark 2.2. We now state the analogue of Lemma 2.4 in this setting.
To do this, let us assume that Γ_1, Γ_2 : {0, 1}^r → G^m and ρ : G^m → C^{w × w}, and let us define A and B as follows:

A = 2^{−r} Σ_{x ∈ {0,1}^r} ρ(Γ_1(x)),   B = 2^{−r} Σ_{x ∈ {0,1}^r} ρ(Γ_2(x)).   (2)

Lemma 4.10 Let ρ : G^m → C^{w × w} be such that ‖ρ(x)‖_2 ≤ 1 for all x ∈ G^m. Let Γ_1, Γ_2 : {0, 1}^r → G^m. Let H be a 2^d-regular graph on {0, 1}^r with second eigenvalue λ. Then, for A and B as defined in (2),

‖A · B − A ⊛_H B‖_2 ≤ λ.

We also have the following observation.

If A and B are the identity on some subspace W, then A · B as well as A ⊛_H B are the identity on W. We recall that a matrix A is said to be the identity on a subspace W if and only if for all x ∈ W, x·A = A·x = x.

We now formulate the problem of constructing small biased spaces for group products in terms of getting pseudorandomness for a certain permutation branching program (which we fool using the INW generator). The branching program has n + 1 layers. Each layer consists of |G| vertices, each vertex labeled by an element of G. The branching program starts at the identity element of G in the 0th layer. Now, if x_i = 1, the branching program moves from x in the (i−1)th layer to x · g_i in the ith layer. On the other hand, if x_i = 0, the branching program moves from x in the (i−1)th layer to x in the ith layer. Thus, we have |G| walk matrices for the transition from the (i−1)th layer to the ith layer; we call them M_{h,i} for every h ∈ G. Further, if x_i = 0, they are all the identity matrix. In case x_i = 1, M_{h,i} is defined by

M_{h,i}(x, y) = 1 if x^{−1} · y = h, and 0 otherwise.

We observe that if we take a random walk starting at the vertex corresponding to the identity element of G in the zeroth layer, and then choose a uniformly random walk matrix among the M_{h,i} to go from layer i − 1 to layer i, then after j steps the distribution on the jth layer is exactly the distribution of g_1^{x_1} · ... · g_j^{x_j}, where the g_i are chosen uniformly at random. Hence, to prove Theorem 4.2, it suffices to analyze the error for this branching program when the g_i are chosen from the output of the INW generator.

We next make the observation that the walk matrices can be block diagonalized such that the blocks have some nice properties.

Observation 4.11 If x_i = 1, the map h ↦ M_{h,i} is a group representation. This implies that there is a basis transformation under which all the walk matrices can be simultaneously block diagonalized.
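The first sentence of Observation 4.11 can be sanity-checked directly (an illustrative sketch for the cyclic group Z_m; the construction above works over any finite group): composing two walk matrices multiplies the corresponding group elements.

```python
def walk_matrix(m, h):
    # M_h(x, y) = 1 iff x^{-1} * y = h; in Z_m this reads y - x = h (mod m).
    return [[1 if (y - x) % m == h else 0 for y in range(m)] for x in range(m)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The map h -> M_h is a group representation: M_g * M_h = M_{g+h}.
m = 5
for g in range(m):
    for h in range(m):
        assert matmul(walk_matrix(m, g), walk_matrix(m, h)) == walk_matrix(m, (g + h) % m)
```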
Note that this is because if x_i = 0, then all the walk matrices are the identity, and hence after any change of basis they remain the identity in every block. If x_i = 1, then each of the blocks of the walk matrices M_{h,i} corresponds to some irreducible representation; the corresponding blocks when x_i = 0 are always the identity. In particular, the following is true.

- If a block corresponds to the trivial representation, then the corresponding block is the 1 × 1 identity matrix in all the walk matrices.
- If a block corresponds to a non-trivial representation, then the following is true. If x_i = 0, then all the walk matrices are the identity in that block. If x_i = 1, then the sum of the walk matrices in that block is identically zero (by Corollary 4.9). Also, all the blocks are unitary matrices, because they correspond to irreducible representations.

The next observation says that since all the walk matrices can be simultaneously block diagonalized, we might as well treat the blocks individually and analyze the error incurred by using the INW generator vis-à-vis the uniform distribution in each of the blocks.

Observation 4.12 Since all the walk matrices are simultaneously block diagonalizable, we can assume that the leaves of the pseudo tree for the INW generator, as well as of the true tree, are labeled by these block-diagonalized matrices as opposed to the original walk matrices.

Also, consider a particular block corresponding to the representation ρ. Let us instead label the leaf nodes (in both trees) by the identity matrix of dimension d_ρ if x_i = 0, and by ρ(h) for h ∈ G if x_i = 1. The labels for the non-leaf nodes in the true and the pseudo tree are generated by taking the true product and the expander product of the children, respectively. For a node x, we denote this labeling by L′(x). If we prove that for any node x_p in the pseudo tree, the corresponding node x_t in the true tree, and any representation ρ, the labelings satisfy ‖L′(x_p) − L′(x_t)‖_2 ≤ ɛ/|G|, then it follows that the INW generator fools the branching program with error ɛ. So, we now treat each block individually. Let us fix a representation ρ and analyze the difference between the labelings in the true tree and the pseudo tree. If the representation corresponding to the block is trivial, then the following claim says that there is no error between the pseudo tree and the true tree.

Claim 4.13 Let x_p and x_t be corresponding nodes in the pseudo tree and the true tree respectively. Let L′(x_p) and L′(x_t) be the labelings of x_p and x_t with respect to the trivial representation. Then L′(x_p) = L′(x_t) = 1.

Proof: Note that for the trivial representation, all the leaf nodes in both the true and the pseudo tree are labeled by the 1 × 1 identity matrix; in particular, the labelings of the leaf nodes in both trees are identical, namely 1. We now use induction on the height of a node. Let x_p be a node at height t in the pseudo tree and x_t be the corresponding node in the true tree, with children y_p, z_p and y_t, z_t respectively. By the induction hypothesis, L′(y_t) = L′(z_t) = 1, and similarly L′(y_p) = L′(z_p) = 1. Now, L′(x_t) = L′(y_t) · L′(z_t) = 1. Further, L′(x_p) = 1 by the observation following Lemma 4.10, as the labeling is 1 on all the leaf nodes under y_p and z_p. This proves the claim.
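For the cyclic group Z_m (an illustrative special case in which every block is 1 × 1), the simultaneous block diagonalization behind this labeling is just the discrete Fourier transform: in the character basis, each walk matrix M_h acts diagonally, with the k-th diagonal entry being χ_k(h) = ω^{hk}.

```python
import cmath

m = 4
omega = cmath.exp(2j * cmath.pi / m)

def walk_matrix(h):
    # M_h(x, y) = 1 iff y = x + h (mod m), the walk matrix for Z_m.
    return [[1 if (y - x) % m == h else 0 for y in range(m)] for x in range(m)]

# Columns of F are the characters of Z_m: F[x][k] = omega^{x*k}.
F = [[omega ** (x * k) for k in range(m)] for x in range(m)]

# M_h F = F D_h with D_h = diag(omega^{h*k}): every walk matrix is diagonal
# in the character basis, and the k = 0 block is the trivial representation.
for h in range(m):
    M = walk_matrix(h)
    for x in range(m):
        for k in range(m):
            lhs = sum(M[x][y] * F[y][k] for y in range(m))
            assert abs(lhs - F[x][k] * omega ** (h * k)) < 1e-9
```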
Next, we consider the case when the representation ρ is non-trivial.

Claim 4.14 Let x_p and x_t be corresponding nodes in the pseudo tree and the true tree respectively. Let L′(x_p) and L′(x_t) be the labelings of x_p and x_t corresponding to the representation ρ. Then ‖L′(x_p) − L′(x_t)‖_2 ≤ 2λ.

Proof: We prove this by induction, observing first that the claim holds for the leaf nodes. We also observe that for any node x_t in the true tree, L′(x_t) is either the identity matrix or 0. This is because if x_i = 0, then the ith leaf node is labeled by the identity matrix, while if x_i = 1, then the leaf node is labeled by 0 (as Σ_{h ∈ G} ρ(h) = 0). Clearly, for a node x_t, L′(x_t) is the identity if and only if all the leaf nodes below it are labeled by the identity; else it is 0.

Now, consider any node x_t in the true tree and its corresponding node x_p. Let the children of x_t be y_t and z_t, and suppose that one of them, let us say y_t, satisfies L′(y_t) = Id. Then L′(y_p) = Id as well, and by the observation following Lemma 4.10, L′(x_t) = L′(z_t) and L′(x_p) = L′(z_p). By induction on the height of the tree, ‖L′(z_p) − L′(z_t)‖_2 ≤ 2λ, which implies that ‖L′(x_p) − L′(x_t)‖_2 ≤ 2λ.

So, we may assume that both L′(y_t) and L′(z_t) are 0. In that case, by the induction hypothesis, ‖L′(y_p)‖_2 ≤ 2λ and similarly ‖L′(z_p)‖_2 ≤ 2λ. By Lemma 4.10, ‖L′(x_p) − L′(y_p) · L′(z_p)‖_2 ≤ λ. This implies that ‖L′(x_p)‖_2 ≤ λ + 4λ^2 ≤ 2λ (provided λ ≤ 1/10).

The above two claims, together with Observation 4.12, imply that it suffices to take λ = ɛ/(2|G|) to get an overall error of ɛ, and hence to obtain Theorem 4.2.
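The two numeric facts driving the proof of Claim 4.14 and the final parameter choice can be checked directly (a quick sanity sketch):

```python
# Inductive step of Claim 4.14: if both children have norm at most 2*lambda
# and the expander product adds at most lambda, the parent has norm at most
# lambda + (2*lambda)^2 = lambda + 4*lambda^2, which is <= 2*lambda whenever
# lambda <= 1/4 (in particular for lambda <= 1/10).
for i in range(1, 1001):
    lam = i / 10000.0  # lambda ranging over (0, 1/10]
    assert lam + 4 * lam ** 2 <= 2 * lam

# Final choice: lambda = eps/(2|G|) makes the per-block error 2*lambda
# equal to the eps/|G| budget that yields overall error eps.
for G_size in [2, 6, 24]:
    eps = 0.01
    assert 2 * (eps / (2 * G_size)) == eps / G_size
```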

5 Pseudorandomness for permutation branching programs

In this section, we discuss the most important result of this paper. Namely, we get a PRG for read-once permutation branching programs of constant width with seed length O(log n · log(1/ɛ)). More precisely, we have the following theorem.

Theorem 5.1 Let Γ : {0, 1}^t → {0, 1}^n be the INW generator with λ = ɛ/2^{w^8}. Then the output of Γ(U_t) is ɛ-indistinguishable from U_n for read-once width-w permutation branching programs of length n.

Using the standard INW generator, we get the following corollary.

Corollary 5.2 There is a polynomial time computable function Γ : {0, 1}^t → {0, 1}^n with t = O(log n · (w^8 + log(1/ɛ))) such that Γ(U_t) is ɛ-indistinguishable from U_n for read-once permutation branching programs of width w.

We now describe the overall strategy for proving Theorem 5.1. As described in Section 2, we will label the leaf nodes in both the INW tree as well as the true tree with the walk matrices corresponding to the branching program. In particular, the label for the ith leaf node will simply be the average of the walk matrix for the transition corresponding to 0 and that corresponding to 1 (recall that we call them M_{0,i} and M_{1,i}). Subsequently, the label of a non-leaf node is the product of the labels of its children in the true tree, and the expander product of the labels of its children in the pseudo tree. Much like the case of abelian groups, for a node x_t in the true tree and the corresponding node x_p in the pseudo tree, we would like to say that ‖L(x_p) − L(x_t)‖ is a function of λ and L(x_t) alone. However, unlike the case of abelian groups, we no longer have the luxury of diagonalizing and treating each coordinate individually. We instead adopt the following strategy. For discussing the intuition further, it will be helpful to introduce the following concepts.
Definition 5.3 For a matrix M ∈ C^{w × w}, we say W is the fixed point subspace of M if W is the maximal subspace such that for all x ∈ W, x·M = M·x = x. In other words, W is the maximal subspace on which M is the identity. We note that for a given matrix M, the fixed point subspace is uniquely defined. Further, for the matrix M, we define its non-trivial subspace to be the orthogonal complement W^⊥.

Definition 5.4 For a matrix A ∈ C^{m × m} with non-trivial subspace W ⊆ C^m, we define

‖A‖_W = max_{v ∈ W, v ≠ 0} ‖v·A‖ / ‖v‖.

Coming back to the structure of the proof, we show that for any node x_t in the true tree and the corresponding node x_p, ‖L(x_p) − L(x_t)‖ can be bounded as a function of the dimension of the non-trivial subspace of L(x_t) (call it W) and ‖L(x_t)‖_W. In particular, we will allow the error to grow as the dimension of W increases or as ‖L(x_t)‖_W decreases. In fact, the dependence of the error on dim(W) shall dominate the dependence of the error on the norm of the label on its non-trivial subspace. The proof shall proceed by induction on the height of the nodes. To convey the intuition, let us say that we will claim that for any pair of corresponding nodes x_t and x_p,

‖L(x_p) − L(x_t)‖ ≤ f(α) · g(β) · λ,

where, if W is the non-trivial subspace of L(x_t), then α = ‖L(x_t)‖_W and β = dim(W). Further, let our choice be such that

lim_{λ → 0} λ · f(α) · g(β) = 0.

This ensures that we can indeed choose λ appropriately: if we just want constant error, then we need to choose only some constant λ dependent on w. Now, assume that this holds by the induction hypothesis up to some height, and consider the inductive step. Let x_t be a node in the true tree and y_t and z_t be its children. Let x_p, y_p and z_p be the corresponding nodes in the pseudo tree. There are exactly three situations which can arise:

- The non-trivial subspaces of y_t and z_t (and hence of x_t) are the same. In this case, the allowed dependence of the error on β plays no role. The only relevant factor is α, and the analysis is similar to the analysis in the case of abelian groups.

- The non-trivial subspaces of y_t and z_t are such that neither is contained within the other. In this case, the non-trivial subspace of x_t has strictly bigger dimension than those of y_t and z_t. It is here that the dependence of the error on β plays a role. In fact, because we allow the dependence on β to supersede any dependence on α, we can bound the error easily. The only thing we need to show is that the norm of x_t on its non-trivial subspace is not very close to 1, which we manage to show easily.

- The non-trivial subspace of y_t is properly contained in that of z_t (or vice versa). In this situation, it is possible that the labeling of x_t has the same norm on its non-trivial subspace as z_t, and yet ‖L(x_p) − L(x_t)‖_W > ‖L(z_p) − L(z_t)‖_W, where W is the non-trivial subspace of z_t and x_t. What is more concerning is that one can have a series of nodes in the true tree, call them x_0, ..., x_m and y_1, ..., y_m, such that each x_i has two children, x_{i−1} and y_i, and for all i, the non-trivial subspace of L(y_i) is properly contained in the non-trivial subspace of L(x_0).
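As a concrete instance of the fixed point subspace and the restricted norm ‖·‖_W used above (a toy 2 × 2 example, not from the text): take A = (M_0 + M_1)/2, where M_0 = Id and M_1 swaps the two coordinates, as for a leaf label in a width-2 permutation branching program.

```python
# A = (I + S)/2, where S is the swap permutation on two coordinates.
A = [[0.5, 0.5],
     [0.5, 0.5]]

def apply_left(v, M):
    # Row vector times matrix: (v M)_j = sum_i v_i * M[i][j].
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(v))]

# Fixed point subspace: spanned by (1, 1), on which A is the identity.
assert apply_left([1.0, 1.0], A) == [1.0, 1.0]

# Non-trivial subspace: the orthogonal complement, spanned by (1, -1);
# there A acts as 0, so ||A||_W = 0 -- A contracts off its fixed space.
assert apply_left([1.0, -1.0], A) == [0.0, 0.0]
```

The inductive bounds below track exactly this kind of contraction on the non-trivial subspace as labels are multiplied up the tree.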
The way around this situation is to do a global analysis of the error incurred by the chain as a whole, rather than trying to do it on a per-node basis. Here we use an induction-based proof of the Key Convergence Lemma of [KNP10]. We will now state the claims we prove and how the main Theorem 5.1 follows from them. Below, when we say that an operator acts trivially on a subspace X, we simply mean that it is the identity on X. The first claim we prove is the following.

Claim 5.5 Let α, α′ ≥ 1/10. Also, let x_t be a node in the true tree and x_p be the corresponding node in the pseudo tree. Also, assume that y_t and z_t are the children of x_t and y_p and z_p are the children of x_p. Further, let the non-trivial subspace of x_t, y_t and z_t be W, with dimension β. Let ‖L(y_t)‖_W = α, ‖L(z_t)‖_W = α′ and ‖L(x_t)‖_W = α″. Then, provided

‖L(y_p) − L(y_t)‖_W ≤ λ · w^{w^5} · w^{w^5 β} · log(1/α) + λ · w^{w^5 β} · 2^β · 6^w

and

‖L(z_p) − L(z_t)‖_W ≤ λ · w^{w^5} · w^{w^5 β} · log(1/α′) + λ · w^{w^5 β} · 2^β · 6^w,

we have

‖L(x_p) − L(x_t)‖_W ≤ λ · w^{w^5} · w^{w^5 β} · log(1/α″).

Moreover, if L(z_p) and L(y_p) act trivially on W^⊥, so does L(x_p).

Claim 5.6 Let x_t be a node in the true tree and x_p be the corresponding node in the pseudo tree. Also, assume that y_t and z_t are the children of x_t and y_p and z_p are the children of x_p. Further, let the non-trivial subspace of x_t, y_t and z_t be W, such that dim(W) = β. Let ‖L(x_t)‖_W < 1/10. Assume that for j ∈ {y, z}, the following holds:

‖L(j_p) − L(j_t)‖_W ≤ λ · w^{w^5} · w^{w^5 β} · log(1/‖L(j_t)‖_W) + λ · w^{w^5 β} · 2^β · 6^w   if ‖L(j_t)‖_W ≥ 1/10,
‖L(j_p) − L(j_t)‖_W ≤ λ · w^{w^5} · w^{w^5 β} · w^{6w} + λ · w^{w^5 β} · 2^β · 6^w   otherwise.   (3)


More information

6.842 Randomness and Computation March 3, Lecture 8

6.842 Randomness and Computation March 3, Lecture 8 6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n

More information

Notes 6 : First and second moment methods

Notes 6 : First and second moment methods Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative

More information

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ). Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.

More information

is an isomorphism, and V = U W. Proof. Let u 1,..., u m be a basis of U, and add linearly independent

is an isomorphism, and V = U W. Proof. Let u 1,..., u m be a basis of U, and add linearly independent Lecture 4. G-Modules PCMI Summer 2015 Undergraduate Lectures on Flag Varieties Lecture 4. The categories of G-modules, mostly for finite groups, and a recipe for finding every irreducible G-module of a

More information

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard

More information

1 Introduction The relation between randomized computations with one-sided error and randomized computations with two-sided error is one of the most i

1 Introduction The relation between randomized computations with one-sided error and randomized computations with two-sided error is one of the most i Improved derandomization of BPP using a hitting set generator Oded Goldreich Department of Computer Science Weizmann Institute of Science Rehovot, Israel. oded@wisdom.weizmann.ac.il Avi Wigderson Institute

More information

Lecture 14 - P v.s. NP 1

Lecture 14 - P v.s. NP 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) February 27, 2018 Lecture 14 - P v.s. NP 1 In this lecture we start Unit 3 on NP-hardness and approximation

More information

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181. Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität

More information

A note on monotone real circuits

A note on monotone real circuits A note on monotone real circuits Pavel Hrubeš and Pavel Pudlák March 14, 2017 Abstract We show that if a Boolean function f : {0, 1} n {0, 1} can be computed by a monotone real circuit of size s using

More information

CS Foundations of Communication Complexity

CS Foundations of Communication Complexity CS 49 - Foundations of Communication Complexity Lecturer: Toniann Pitassi 1 The Discrepancy Method Cont d In the previous lecture we ve outlined the discrepancy method, which is a method for getting lower

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Ma/CS 117c Handout # 5 P vs. NP

Ma/CS 117c Handout # 5 P vs. NP Ma/CS 117c Handout # 5 P vs. NP We consider the possible relationships among the classes P, NP, and co-np. First we consider properties of the class of NP-complete problems, as opposed to those which are

More information

Expander Construction in VNC 1

Expander Construction in VNC 1 Expander Construction in VNC 1 Sam Buss joint work with Valentine Kabanets, Antonina Kolokolova & Michal Koucký Prague Workshop on Bounded Arithmetic November 2-3, 2017 Talk outline I. Combinatorial construction

More information

k-protected VERTICES IN BINARY SEARCH TREES

k-protected VERTICES IN BINARY SEARCH TREES k-protected VERTICES IN BINARY SEARCH TREES MIKLÓS BÓNA Abstract. We show that for every k, the probability that a randomly selected vertex of a random binary search tree on n nodes is at distance k from

More information

Where do pseudo-random generators come from?

Where do pseudo-random generators come from? Computer Science 2426F Fall, 2018 St. George Campus University of Toronto Notes #6 (for Lecture 9) Where do pseudo-random generators come from? Later we will define One-way Functions: functions that are

More information

Lecture 7: Passive Learning

Lecture 7: Passive Learning CS 880: Advanced Complexity Theory 2/8/2008 Lecture 7: Passive Learning Instructor: Dieter van Melkebeek Scribe: Tom Watson In the previous lectures, we studied harmonic analysis as a tool for analyzing

More information

Multi-Linear Formulas for Permanent and Determinant are of Super-Polynomial Size

Multi-Linear Formulas for Permanent and Determinant are of Super-Polynomial Size Multi-Linear Formulas for Permanent and Determinant are of Super-Polynomial Size Ran Raz Weizmann Institute ranraz@wisdom.weizmann.ac.il Abstract An arithmetic formula is multi-linear if the polynomial

More information

Metric spaces and metrizability

Metric spaces and metrizability 1 Motivation Metric spaces and metrizability By this point in the course, this section should not need much in the way of motivation. From the very beginning, we have talked about R n usual and how relatively

More information

L(C G (x) 0 ) c g (x). Proof. Recall C G (x) = {g G xgx 1 = g} and c g (x) = {X g Ad xx = X}. In general, it is obvious that

L(C G (x) 0 ) c g (x). Proof. Recall C G (x) = {g G xgx 1 = g} and c g (x) = {X g Ad xx = X}. In general, it is obvious that ALGEBRAIC GROUPS 61 5. Root systems and semisimple Lie algebras 5.1. Characteristic 0 theory. Assume in this subsection that chark = 0. Let me recall a couple of definitions made earlier: G is called reductive

More information

A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits

A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits Ran Raz Amir Shpilka Amir Yehudayoff Abstract We construct an explicit polynomial f(x 1,..., x n ), with coefficients in {0,

More information

Structure of rings. Chapter Algebras

Structure of rings. Chapter Algebras Chapter 5 Structure of rings 5.1 Algebras It is time to introduce the notion of an algebra over a commutative ring. So let R be a commutative ring. An R-algebra is a ring A (unital as always) together

More information

Representation theory

Representation theory Representation theory Dr. Stuart Martin 2. Chapter 2: The Okounkov-Vershik approach These guys are Andrei Okounkov and Anatoly Vershik. The two papers appeared in 96 and 05. Here are the main steps: branching

More information

Eigenvalues, random walks and Ramanujan graphs

Eigenvalues, random walks and Ramanujan graphs Eigenvalues, random walks and Ramanujan graphs David Ellis 1 The Expander Mixing lemma We have seen that a bounded-degree graph is a good edge-expander if and only if if has large spectral gap If G = (V,

More information

Compute the Fourier transform on the first register to get x {0,1} n x 0.

Compute the Fourier transform on the first register to get x {0,1} n x 0. CS 94 Recursive Fourier Sampling, Simon s Algorithm /5/009 Spring 009 Lecture 3 1 Review Recall that we can write any classical circuit x f(x) as a reversible circuit R f. We can view R f as a unitary

More information

NOTES ON POLYNOMIAL REPRESENTATIONS OF GENERAL LINEAR GROUPS

NOTES ON POLYNOMIAL REPRESENTATIONS OF GENERAL LINEAR GROUPS NOTES ON POLYNOMIAL REPRESENTATIONS OF GENERAL LINEAR GROUPS MARK WILDON Contents 1. Definition of polynomial representations 1 2. Weight spaces 3 3. Definition of the Schur functor 7 4. Appendix: some

More information

Representation Theory

Representation Theory Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 Paper 1, Section II 19I 93 (a) Define the derived subgroup, G, of a finite group G. Show that if χ is a linear character

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

YOUNG TABLEAUX AND THE REPRESENTATIONS OF THE SYMMETRIC GROUP

YOUNG TABLEAUX AND THE REPRESENTATIONS OF THE SYMMETRIC GROUP YOUNG TABLEAUX AND THE REPRESENTATIONS OF THE SYMMETRIC GROUP YUFEI ZHAO ABSTRACT We explore an intimate connection between Young tableaux and representations of the symmetric group We describe the construction

More information

Lecture 6: Random Walks versus Independent Sampling

Lecture 6: Random Walks versus Independent Sampling Spectral Graph Theory and Applications WS 011/01 Lecture 6: Random Walks versus Independent Sampling Lecturer: Thomas Sauerwald & He Sun For many problems it is necessary to draw samples from some distribution

More information

An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees

An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees An Algebraic View of the Relation between Largest Common Subtrees and Smallest Common Supertrees Francesc Rosselló 1, Gabriel Valiente 2 1 Department of Mathematics and Computer Science, Research Institute

More information

IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS. Contents

IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS. Contents IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS NEEL PATEL Abstract. The goal of this paper is to study the irreducible representations of semisimple Lie algebras. We will begin by considering two

More information

PSRGs via Random Walks on Graphs

PSRGs via Random Walks on Graphs Spectral Graph Theory Lecture 11 PSRGs via Random Walks on Graphs Daniel A. Spielman October 3, 2012 11.1 Overview There has been a lot of work on the design of Pseudo-Random Number Generators (PSRGs)

More information

Cographs; chordal graphs and tree decompositions

Cographs; chordal graphs and tree decompositions Cographs; chordal graphs and tree decompositions Zdeněk Dvořák September 14, 2015 Let us now proceed with some more interesting graph classes closed on induced subgraphs. 1 Cographs The class of cographs

More information

Chapter 5 The Witness Reduction Technique

Chapter 5 The Witness Reduction Technique Outline Chapter 5 The Technique Luke Dalessandro Rahul Krishna December 6, 2006 Outline Part I: Background Material Part II: Chapter 5 Outline of Part I 1 Notes On Our NP Computation Model NP Machines

More information

Higher-order Fourier analysis of F n p and the complexity of systems of linear forms

Higher-order Fourier analysis of F n p and the complexity of systems of linear forms Higher-order Fourier analysis of F n p and the complexity of systems of linear forms Hamed Hatami School of Computer Science, McGill University, Montréal, Canada hatami@cs.mcgill.ca Shachar Lovett School

More information

GROUPS AS GRAPHS. W. B. Vasantha Kandasamy Florentin Smarandache

GROUPS AS GRAPHS. W. B. Vasantha Kandasamy Florentin Smarandache GROUPS AS GRAPHS W. B. Vasantha Kandasamy Florentin Smarandache 009 GROUPS AS GRAPHS W. B. Vasantha Kandasamy e-mail: vasanthakandasamy@gmail.com web: http://mat.iitm.ac.in/~wbv www.vasantha.in Florentin

More information

INTRODUCTION TO REPRESENTATION THEORY AND CHARACTERS

INTRODUCTION TO REPRESENTATION THEORY AND CHARACTERS INTRODUCTION TO REPRESENTATION THEORY AND CHARACTERS HANMING ZHANG Abstract. In this paper, we will first build up a background for representation theory. We will then discuss some interesting topics in

More information

Recursive definitions on surreal numbers

Recursive definitions on surreal numbers Recursive definitions on surreal numbers Antongiulio Fornasiero 19th July 2005 Abstract Let No be Conway s class of surreal numbers. I will make explicit the notion of a function f on No recursively defined

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations Page 1 Definitions Tuesday, May 8, 2018 12:23 AM Notations " " means "equals, by definition" the set of all real numbers the set of integers Denote a function from a set to a set by Denote the image of

More information

Lecture 1: September 25, A quick reminder about random variables and convexity

Lecture 1: September 25, A quick reminder about random variables and convexity Information and Coding Theory Autumn 207 Lecturer: Madhur Tulsiani Lecture : September 25, 207 Administrivia This course will cover some basic concepts in information and coding theory, and their applications

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 1: Quantum circuits and the abelian QFT

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 1: Quantum circuits and the abelian QFT Quantum algorithms (CO 78, Winter 008) Prof. Andrew Childs, University of Waterloo LECTURE : Quantum circuits and the abelian QFT This is a course on quantum algorithms. It is intended for graduate students

More information

Lecture 6: Finite Fields

Lecture 6: Finite Fields CCS Discrete Math I Professor: Padraic Bartlett Lecture 6: Finite Fields Week 6 UCSB 2014 It ain t what they call you, it s what you answer to. W. C. Fields 1 Fields In the next two weeks, we re going

More information

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:???

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:??? MIT 6.972 Algebraic techniques and semidefinite optimization May 9, 2006 Lecture 2 Lecturer: Pablo A. Parrilo Scribe:??? In this lecture we study techniques to exploit the symmetry that can be present

More information

A sequence of triangle-free pseudorandom graphs

A sequence of triangle-free pseudorandom graphs A sequence of triangle-free pseudorandom graphs David Conlon Abstract A construction of Alon yields a sequence of highly pseudorandom triangle-free graphs with edge density significantly higher than one

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

32 Divisibility Theory in Integral Domains

32 Divisibility Theory in Integral Domains 3 Divisibility Theory in Integral Domains As we have already mentioned, the ring of integers is the prototype of integral domains. There is a divisibility relation on * : an integer b is said to be divisible

More information

Lectures 15: Cayley Graphs of Abelian Groups

Lectures 15: Cayley Graphs of Abelian Groups U.C. Berkeley CS294: Spectral Methods and Expanders Handout 15 Luca Trevisan March 14, 2016 Lectures 15: Cayley Graphs of Abelian Groups In which we show how to find the eigenvalues and eigenvectors of

More information

(Rgs) Rings Math 683L (Summer 2003)

(Rgs) Rings Math 683L (Summer 2003) (Rgs) Rings Math 683L (Summer 2003) We will first summarise the general results that we will need from the theory of rings. A unital ring, R, is a set equipped with two binary operations + and such that

More information

Lecture 8: Complete Problems for Other Complexity Classes

Lecture 8: Complete Problems for Other Complexity Classes IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Basic Course on Computational Complexity Lecture 8: Complete Problems for Other Complexity Classes David Mix Barrington and Alexis Maciel

More information

Lecture 4: LMN Learning (Part 2)

Lecture 4: LMN Learning (Part 2) CS 294-114 Fine-Grained Compleity and Algorithms Sept 8, 2015 Lecture 4: LMN Learning (Part 2) Instructor: Russell Impagliazzo Scribe: Preetum Nakkiran 1 Overview Continuing from last lecture, we will

More information

Online Learning, Mistake Bounds, Perceptron Algorithm

Online Learning, Mistake Bounds, Perceptron Algorithm Online Learning, Mistake Bounds, Perceptron Algorithm 1 Online Learning So far the focus of the course has been on batch learning, where algorithms are presented with a sample of training data, from which

More information

Since G is a compact Lie group, we can apply Schur orthogonality to see that G χ π (g) 2 dg =

Since G is a compact Lie group, we can apply Schur orthogonality to see that G χ π (g) 2 dg = Problem 1 Show that if π is an irreducible representation of a compact lie group G then π is also irreducible. Give an example of a G and π such that π = π, and another for which π π. Is this true for

More information

Fourier analysis of boolean functions in quantum computation

Fourier analysis of boolean functions in quantum computation Fourier analysis of boolean functions in quantum computation Ashley Montanaro Centre for Quantum Information and Foundations, Department of Applied Mathematics and Theoretical Physics, University of Cambridge

More information