Random walk: notes

László Lovász

December 5, 2017


Contents

1 Basics
   1.1 Random walks and finite Markov chains
   1.2 Matrices of graphs
   1.3 Stationary distribution
   1.4 Harmonic functions
2 Times
   2.1 Return time
   2.2 Hitting time
   2.3 Commute time
   2.4 Cover time
   2.5 Universal traverse sequences
3 Mixing
   3.1 Eigenvalues
   3.2 Coupling
   3.3 Random coloring of a graph
   3.4 Conductance
4 Stopping rules
   4.1 Exit frequencies
   4.2 Mixing and ε-mixing
5 Applications
   5.1 Volume computation
      5.1.1 What is a convex body?
      5.1.2 Lower bounds on the complexity
   5.2 Monte-Carlo algorithms
      5.2.1 Measurable Markov chains

      5.2.3 Isoperimetry
      5.2.4 The ball walk
   5.3 Random spanning tree

1 Basics

1.1 Random walks and finite Markov chains

Let G = (V, E) be a connected finite graph with n ≥ 2 vertices and m edges. We usually assume that V = {1, ..., n}. Let a(i, j) denote the number of edges connecting i and j.

A random walk on G is an (infinite) sequence of random vertices v_0, v_1, v_2, ..., where v_0 is chosen from some given initial probability distribution σ^0 on V (often concentrated on a single point), and for each t ≥ 0, v_{t+1} is obtained by choosing an edge from the uniform distribution on the set of edges incident with v_t, and moving to its other endpoint. All these choices are made independently. We denote by σ^t the distribution of v_t: σ^t_i = P(v_t = i), and by σ^t the vector (σ^t_i : i ∈ V). If we are at node i, then the probability of moving to node j is

   p_{ij} = a(i, j) / d(i).						(1)

In the case of a simple graph,

   p_{ij} = 1/d(i) if ij ∈ E(G), and p_{ij} = 0 otherwise.

Note that

   ∑_{j ∈ V} p_{ij} = 1     (i ∈ V).					(2)

A nonnegative matrix P = (p_{ij})_{i,j=1}^n satisfying (2) defines a finite Markov chain. A walk of the chain is the random sequence (v_0, v_1, v_2, ...) where v_0 is chosen from some initial probability distribution σ^0, and v_{t+1} is chosen from the probability distribution (p_{v_t, j} : j = 1, ..., n). All these choices are made independently.

We call the Markov chain irreducible if for every S ⊆ V with S ≠ ∅, V there are i ∈ S and j ∈ V \ S with p_{ij} > 0. For the random walk on a graph, this just means that the graph is connected.

Proposition 1.1 In every irreducible Markov chain, every node is visited infinitely often with probability 1.

We continue with some examples.

Example 1.2 (Waiting Problem) Let A_1, A_2, ... be a sequence of independent events, each with probability p. Let A_N be the first one that occurs. Then E(N) = 1/p.

Proof: E(N) = p · 1 + (1 − p)(1 + E(N)).
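Equations (1) and (2) are easy to check mechanically. The following Python sketch (the multigraph is an arbitrary toy example, not taken from the notes) builds the transition matrix of the random walk from an edge list with multiplicities and verifies that every row sums to 1:

```python
from fractions import Fraction

# A small multigraph on vertices 0..3: two parallel edges between 0 and 1,
# plus a triangle 1-2-3.  (Arbitrary toy example.)
edges = [(0, 1), (0, 1), (1, 2), (2, 3), (3, 1)]

n = 4
a = [[0] * n for _ in range(n)]            # a[i][j] = number of i-j edges
for u, v in edges:
    a[u][v] += 1
    a[v][u] += 1

d = [sum(row) for row in a]                # degrees
# Equation (1): p_ij = a(i, j) / d(i)
P = [[Fraction(a[i][j], d[i]) for j in range(n)] for i in range(n)]

# Equation (2): each row of the transition matrix sums to 1.
assert all(sum(row) == 1 for row in P)

print(P[0][1])   # both edges at 0 lead to 1, so p_01 = 1
print(P[1][0])   # two of the four edge-ends at 1 go to 0, so p_10 = 1/2
```

Exact rational arithmetic (Fraction) avoids any floating-point doubt in such small checks.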

This can be modeled as a Markov chain with two states, W (for Wait) and S (for Success), where

   p_{W,W} = 1 − p,   p_{W,S} = p,   p_{S,W} = 0,   p_{S,S} = 1.

(The last two values are given only to have a full Markov chain, since we don't care what happens after reaching S.)

An alternative proof would use the formula P(N = t) = (1 − p)^{t−1} p. From this one could easily compute the expectation of N^2 and the variance of N:

   E(N^2) = (2 − p)/p^2,   Var(N) = (1 − p)/p^2.

Instead of the independence of the events A_i, it would suffice to assume that P(A_i | Ā_1 ∩ ... ∩ Ā_{i−1}) = p. If we only assume that P(A_i | Ā_1 ∩ ... ∩ Ā_{i−1}) ≥ p, then the inequalities E(N) ≤ 1/p and E(N^2) ≤ (2 − p)/p^2 still follow.

Example 1.3 (Gambler's Ruin) A gambler is betting on coin flips: at every turn, he loses one dollar or gains one dollar, each with probability 1/2. He starts with k dollars, and sets a target wealth of n > k dollars. He quits if he either reaches n dollars (a win) or loses all his money (a loss). What is his probability of winning?

This can be phrased as a random walk on a path with nodes {0, 1, ..., n}, starting at node k. Let f(k) be the probability of hitting n before 0. Then f(n) = 1, f(0) = 0, and

   f(k) = (1/2) f(k − 1) + (1/2) f(k + 1)     (1 ≤ k ≤ n − 1).

So the numbers f(k) form an arithmetic progression, and hence f(k) = k/n.

Example 1.4 (Coupon Collector Problem) A type of cereal box contains one of n different coupons, each with the same probability. How many boxes do you have to open, in expectation, in order to collect all of the different coupons?

Let T_i denote the first time when i coupons have been collected. So T_1 = 0 < T_2 = 1 < ... < T_n, and we want to determine E(T_n). The difference T_{k+1} − T_k is the number of boxes you open before finding a new coupon, when having k coupons already. Each box contains a new coupon with probability (n − k)/n, independently of the previous steps. Hence by the Waiting Problem (Example 1.2),

   E(T_{k+1} − T_k) = n/(n − k),
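The boundary-value problem in Example 1.3 can also be solved mechanically. Below is a Python sketch (n = 10 is an arbitrary choice) that builds the tridiagonal system f(k) = (f(k−1) + f(k+1))/2 with the stated boundary values and solves it exactly; the solution comes out as f(k) = k/n, matching the arithmetic-progression argument:

```python
from fractions import Fraction

n = 10                      # target wealth (arbitrary); states 0..n
m = n - 1                   # unknowns f(1), ..., f(n-1)

# Build 2 f(k) - f(k-1) - f(k+1) = 0, with f(0) = 0 and f(n) = 1
# moved to the right-hand side.
A = [[Fraction(0)] * m for _ in range(m)]
b = [Fraction(0)] * m
for k in range(1, n):
    i = k - 1
    A[i][i] = Fraction(2)
    if k > 1:
        A[i][i - 1] = Fraction(-1)
    if k < n - 1:
        A[i][i + 1] = Fraction(-1)
    else:
        b[i] = Fraction(1)  # boundary term f(n) = 1

# Gaussian elimination (the system is tridiagonal; pivots never vanish).
for i in range(m):
    for j in range(i + 1, m):
        r = A[j][i] / A[i][i]
        for c in range(i, m):
            A[j][c] -= r * A[i][c]
        b[j] -= r * b[i]
f = [Fraction(0)] * m
for i in reversed(range(m)):
    f[i] = (b[i] - sum(A[i][c] * f[c] for c in range(i + 1, m))) / A[i][i]

assert f == [Fraction(k, n) for k in range(1, n)]   # f(k) = k/n
```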

and so the total expected time is

   E(T_n) = ∑_{k=0}^{n−1} E(T_{k+1} − T_k) = ∑_{k=0}^{n−1} n/(n − k) = n · har(n) ≈ n ln n,

where har(n) = 1 + 1/2 + ... + 1/n. This can be modeled as a random walk on a complete graph with a loop at each node.

Example 1.5 (Card shuffling) The usual riffle shuffle can be modeled as a Markov chain on the set of all permutations of 52 cards. A step is generated by taking a random sequence ε_1 ε_2 ... ε_52 of 0's and 1's of length 52. If there are a 1's in the sequence, we take off a cards from the top of the deck, place them to the right of the rest, and then merge the two piles so that the i-th card comes from the pile on the right if and only if ε_i = 1. The question is, how many steps are needed to shuffle well, i.e., to get a permutation approximately uniformly distributed over all permutations? The surprising answer is: seven shuffle moves suffice.

1.2 Matrices of graphs

We need the following theorem. We call an n × n matrix irreducible if it does not contain a k × (n − k) block of 0's disjoint from the diagonal.

Theorem 1.6 (Perron-Frobenius) If an n × n matrix has nonnegative entries, then it has a nonnegative real eigenvalue λ which has maximum absolute value among all eigenvalues. This eigenvalue λ has a nonnegative real eigenvector. If, in addition, the matrix is irreducible, then λ has multiplicity 1 and the corresponding eigenvector is positive (up to scaling).

The adjacency matrix of a simple graph G is defined as the n × n matrix A = A_G = (A_ij) in which

   A_ij = 1 if i and j are adjacent, and A_ij = 0 otherwise.

If G has multiple edges, then we let A_ij = a(i, j). We could also allow loops and include this information in the diagonal. In this course, a loop at node i adds one to the degree of the node.

The Laplacian of the graph is defined as the n × n matrix L = L_G = (L_ij) in which

   L_ij = d(i) if i = j,  and  L_ij = −a(i, j) if i ≠ j.

So L = D − A, where D = D_G is the diagonal matrix of the degrees of G. Clearly L1 = 0.
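The harmonic-number formula is easy to evaluate. Here is a quick Python check for n = 52 (the deck size from Example 1.5, an arbitrary choice here), comparing E(T_n) = n · har(n) with the asymptotic n ln n:

```python
from fractions import Fraction
import math

n = 52
expected = sum(Fraction(n, n - k) for k in range(n))   # sum of n/(n-k)
har = sum(Fraction(1, k) for k in range(1, n + 1))
assert expected == n * har                             # E(T_n) = n * har(n)

print(float(expected))      # about 236 boxes in expectation
print(n * math.log(n))      # about 205: the n ln n asymptotics
```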

Let P = D^{−1} A be the transition matrix of the random walk. Explicitly,

   (P)_ij = p_{ij} = a(i, j)/d(i).

If G is d-regular, then P = (1/d)A. The matrix P is not symmetric in general, but the equation

   D^{1/2} P D^{−1/2} = D^{−1/2} A D^{−1/2}

shows that P is similar to the symmetric matrix P̂ = D^{−1/2} A D^{−1/2}. In particular, the eigenvalues λ_1 > λ_2 ≥ ... ≥ λ_n of P̂ are also eigenvalues of P, and hence they are real numbers. If w_1, ..., w_n are the corresponding orthonormal eigenvectors, then the spectral decomposition

   P̂ = ∑_{k=1}^n λ_k w_k w_k^T

gives the decomposition

   P = ∑_{k=1}^n λ_k u_k v_k^T,						(3)

where u_k = D^{−1/2} w_k are right eigenvectors and v_k = D^{1/2} w_k are left eigenvectors of P. We have u_k^T v_l = 1(k = l), and u_1 = 1 and v_1 = π.

We can express the distribution after t steps by a simple formula. We have

   σ^{t+1}_j = ∑_i σ^t_i a(i, j)/d(i).

This can be written as σ^{t+1} = P^T σ^t, and hence

   σ^t = (P^T)^t σ^0 = ∑_{k=1}^n λ_k^t (u_k^T σ^0) v_k.			(4)

The matrix entry (P^t)_ij is the probability that starting at i we reach j in t steps.

Theorem 1.7 −1 is an eigenvalue of P if and only if G is bipartite.

1.3 Stationary distribution

Considering a random walk on a graph G, define

   π_i = d(i)/(2m)     (i ∈ V).						(5)

This is a probability distribution on V, called the stationary distribution of the chain. Note that for every edge ij of a simple graph,

   π_i p_{ij} = 1/(2m).							(6)

If we start the chain from initial distribution σ^0 = π, then

   σ^1_j = ∑_i π_i p_{ij} = d(j) · 1/(2m) = π_j,

and repeating this, we see that the distribution σ^t after any number of steps remains π. (This explains the name.)

In the more general case of Markov chains, we cannot give such a simple definition, but the Perron-Frobenius Theorem implies that there is a distribution (π_i : i = 1, ..., n) preserved by the chain, i.e.,

   ∑_{i=1}^n π_i p_{ij} = π_j.						(7)

So π is a left eigenvector of the transition matrix P, belonging to the eigenvalue 1. (The corresponding right eigenvector is 1.) It also follows from the Perron-Frobenius Theorem that the stationary distribution of an irreducible Markov chain is unique.

If G is regular, then the Markov chain is symmetric: p_{uv} = p_{vu}. We say that the Markov chain is time-reversible if π_u p_{uv} = π_v p_{vu} for all u and v. The random walk on an (undirected) graph is time-reversible by (6). (Informally, this means that a random walk considered backwards is also a random walk.)

Theorem 1.8 For the random walk on a non-bipartite graph, the distribution of v_t tends to the stationary distribution as t → ∞.

This is not true for bipartite graphs if n > 1, since v_t is concentrated on one or the other color class, depending on the parity of t.

Proof. By the Perron-Frobenius Theorem, every eigenvalue of P is in the interval [−1, 1]; if G is non-bipartite, then it follows that −1 is not an eigenvalue. Using (4), we see that

   σ^t = ∑_{k=1}^n λ_k^t (u_k^T σ^0) v_k → (u_1^T σ^0) v_1     (t → ∞).

We know that u_1 = 1 and v_1 = π, and u_1^T σ^0 = ∑_i σ^0_i = 1, so σ^t → π as claimed.
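Theorem 1.8 can be watched in action. The following Python sketch (the graph, a triangle with a pendant vertex, is an arbitrary non-bipartite example) iterates σ^{t+1} = P^T σ^t in exact arithmetic and checks that σ^t approaches π with π_i = d(i)/2m:

```python
from fractions import Fraction

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # non-bipartite: contains a triangle
m = sum(len(v) for v in adj.values()) // 2           # 4 edges

pi = {v: Fraction(len(adj[v]), 2 * m) for v in adj}  # (5): pi_i = d(i)/2m

sigma = {v: Fraction(1) if v == 0 else Fraction(0) for v in adj}  # start at node 0
for _ in range(100):                                 # sigma^{t+1} = P^T sigma^t
    new = {v: Fraction(0) for v in adj}
    for u in adj:
        for w in adj[u]:
            new[w] += sigma[u] / len(adj[u])
    sigma = new

dist = max(abs(sigma[v] - pi[v]) for v in adj)
assert dist < Fraction(1, 10**6)                     # sigma^t is essentially pi
```

On a bipartite graph the same loop would oscillate between the two color classes instead of converging, which is exactly the exception Theorem 1.8 makes.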

1.4 Harmonic functions

Let G be a connected simple graph, S ⊆ V and f : V → R. The function f is called harmonic at a node v ∈ V if

   (1/d(v)) ∑_{u ∈ N(v)} f(u) = f(v),					(8)

asserting that the value of f at v is the average of its values at the neighbors of v. A node where a function is not harmonic is called a pole of the function. Another way of writing the definition is

   ∑_{u ∈ N(v)} (f(v) − f(u)) = 0     (v ∈ V \ S).			(9)

If we allow multiple edges, then the definition is

   (1/d(v)) ∑_{u ∈ V} a(u, v) f(u) = f(v),				(10)

where a(u, v) is the multiplicity of the edge uv.

Every constant function is harmonic at each node. On the other hand,

Proposition 1.9 Every nonconstant function on the nodes of a connected graph has at least two poles.

Proof. Let S be the set where the function assumes its maximum, and let S′ be the set of those nodes in S that are connected to a node outside S. Then every node in S′ must be a pole, since in (8), every value f(u) on the left hand side is at most f(v), and at least one is less, so the average is less than f(v). Since the function is nonconstant, S is a nonempty proper subset of V, and since the graph is connected, S′ is nonempty. So there is a pole where the function attains its maximum. Similarly, there is another pole where it attains its minimum.

For any two nodes there is a nonconstant function, with these two nodes as its poles, that is harmonic everywhere else. More generally, we have the following theorem.

Theorem 1.10 For a connected simple graph G, nonempty set S ⊆ V and function f_0 : S → R, there is a unique function f : V → R extending f_0 that is harmonic at each node of V \ S.

We call this function f the harmonic extension of f_0. Note that if |S| = 1, then the harmonic extension is a constant function (so it is also harmonic at the single node of S, which does not contradict Proposition 1.9).

The uniqueness of the harmonic extension is easy by the argument in the proof of Proposition 1.9. Suppose that f and f′ are two harmonic extensions of f_0. Then g = f − f′ is harmonic on V \ S, and satisfies g(v) = 0 at each v ∈ S. If g is the identically 0 function, then f = f′ as claimed. Else, either its minimum or its maximum is different from 0. But we have seen that both the minimizers and the maximizers contain at least one pole, which is a contradiction.

To prove the existence, we describe three constructions, which all will be useful.

(a) Let u be the (random) point where a random walk starting at vertex v hits S (we know this happens almost surely), and let f(v) = E(f_0(u)). Then f is a harmonic extension of f_0.

(b) Consider the graph G as an electrical network, where each edge represents a unit resistance. Keep each node u ∈ S at electric potential f_0(u), and define f(v) as the potential of node v. Then f is a harmonic extension of f_0. (Use Kirchhoff's Laws.)

(c) Consider the edges of the graph G as ideal rubber bands with unit Hooke constant (i.e., it takes h units of force to stretch them to length h). Let us nail down each node u ∈ S to point f_0(u) on the real line, and let the graph find its equilibrium. The energy is a positive definite quadratic form of the positions of the nodes, and so there is a unique minimizing position, which is in equilibrium. The positions of the nodes define a harmonic extension of f_0.

Example 1.11 Let S = {a, b} and f_0(a) = 0, f_0(b) = 1. Let f be the harmonic extension. Then f(v) is the probability that a random walk starting at v hits b before a.

We can extend the notion of harmonic functions to Markov chains. We say that a function f : V → R is harmonic at node i if

   ∑_j p_{ij} f(j) = f(i).

Essentially the same proof as for random walks implies that if a function on the nodes of an irreducible Markov chain is harmonic at every node, then the function is constant.

Lemma 1.12 Let us stretch two nodes a, b of a graph G in the rubber band model to distance 1. Let F(a, b) be the force needed for this, and let f(u) be the position of node u if a is at point 0 and b is at point 1.

(a) The effective resistance in the electrical network between a and b is 1/F(a, b).

(b) F(a, b) = ∑_{ij ∈ E} (f(i) − f(j))^2.

Proof. (a) We have F(a, b) = ∑_{i ∈ N(a)} f(i), since a neighbor i of a pulls a with force f(i) − f(a) = f(i). On the other hand, if we fix the potentials f(a) = 0 and f(b) = 1, then by Ohm's Law, the current through an edge ai is f(i). Hence the current through the network is ∑_{i ∈ N(a)} f(i) = F(a, b), and by Ohm's Law, the effective resistance is 1/F(a, b).

(b) For the rubber band model, imagine that we slowly stretch the graph until nodes a and b are at distance 1. When they are at distance t, the force pulling our hands is tF(a, b), and hence the energy we have to spend is

   ∫_0^1 tF(a, b) dt = F(a, b)/2.

This energy accumulates in the rubber bands. By a similar argument, the energy stored in the rubber band ij is (f(i) − f(j))^2/2. By conservation of energy, we get the identity

   ∑_{ij ∈ E} (f(i) − f(j))^2 = F(a, b).				(11)

Exercise 1.13 For a finite Markov chain, its underlying graph is obtained by connecting two states i and j by an edge if either p_{ij} > 0 or p_{ji} > 0. Prove that if the underlying graph of a Markov chain is a tree, then the chain is time-reversible.

Exercise 1.14 Let G be a bipartite graph with bipartition {U, W}. Let us start a random walk from a node u ∈ U. Prove that for i ∈ U,

   σ^{2t}_i → d(i)/m     (t → ∞).

Exercise 1.15 Let G be the standard grid graph in the plane (defined on the lattice points, where two of them are connected by an edge if their distance is 1; so G is countably infinite), and let f be a non-negative valued harmonic function on G. Prove that f is constant.

2 Times

The return time R_u to node u is the expected number of steps of the random walk starting at u before it returns to u.

The hitting time H(u, v) (also called access time) from vertex u to vertex v is the expected number of steps of a random walk starting at u before visiting v. We set H_max = max_{u,v} H(u, v).

The commute time comm(u, v) between vertices u and v is the expected number of steps of a random walk starting at u before visiting v and returning to u. Clearly comm(u, v) = comm(v, u) = H(u, v) + H(v, u).

The cover time C(u) from vertex u is the expected number of steps of a random walk starting at u before every vertex is visited. We set C_max = max_u C(u).

Warning: often the hitting time is defined as a random variable, the number of steps of the random walk starting at u before visiting v (which depends on the random walk), and the number H(u, v) is called the expected hitting time or mean hitting time; similarly for comm etc. Example 1.2 concerns hitting time, Example 1.4 concerns cover time, while Example 1.5 is about mixing time, to be discussed later.
2.1 Return time

We start with a technical lemma.

Lemma 2.1 Let v_0, v_1, ... be a walk in a finite irreducible Markov chain started at a node u. Then the random variable T = min{t : v_t = w} has finite expectation and variance for any node w.

Proof. Between any two nodes u and w there is a path u_0 = u, u_1, ..., u_k = w of length k < n such that p_{u_i, u_{i+1}} > 0 for every 0 ≤ i < k. Therefore there is an ε > 0 such that the probability that we visit w within n steps is at least ε. This holds for the next stretch of n steps etc. By the Waiting Problem (Example 1.2), the expected number of stretches to wait is at most 1/ε, so E(T) ≤ n/ε is finite. Similarly, E(T^2) is finite, and hence so is Var(T) = E(T^2) − E(T)^2.

This lemma implies that R_u is finite and well-defined.

Theorem 2.2 For every finite Markov chain and every node u, R_u = 1/π_u.

Proof. Before giving an exact proof, let us describe a simple heuristic why this is true. Consider a very long random walk v_0, v_1, ..., v_T started from the stationary distribution. Then P(v_t = u) = π_u, and so the expected number of visits to u is Tπ_u. The expected time between two consecutive visits is R_u, so the total time T is about Tπ_u R_u. So T ≈ Tπ_u R_u, and π_u R_u = 1.

To make this precise, let N be a large positive integer, let ε > 0, and set t = (1 − ε)R_u N. Start a random walk from the stationary distribution. Let T_k denote the time of the k-th visit to u, and let X denote the number of visits before time t. We have E(T_{k+1} − T_k) = R_u, and the differences T_{k+1} − T_k are independent identically distributed random variables with finite expectation and variance by Lemma 2.1, hence

   (1/N) T_N = (1/N) T_1 + (1/N) ∑_{k=1}^{N−1} (T_{k+1} − T_k)

will be arbitrarily close to R_u if N is large enough, with probability arbitrarily close to 1. This means that with probability at least 1 − ε, T_N ≥ t, and in such cases, X ≤ N. Thus

   E(X) ≤ P(X ≤ N) N + P(X > N) t ≤ N + εt.

On the other hand, we have E(X) = tπ_u, and so

   (π_u − ε)t = (π_u − ε)(1 − ε)R_u N ≤ N.

Dividing by N and letting ε → 0, we get π_u R_u ≤ 1. The inequality π_u R_u ≥ 1 follows similarly.

Corollary 2.3 For the random walk on a graph starting from node v, the expected number of steps before an edge uv is passed (in this direction) is 2m.

2.2 Hitting time

There is a basic equation for hitting times:

   H(i, j) = 1 + (1/d(i)) ∑_{k ∈ N(i)} H(k, j)   (i ≠ j),      H(i, i) = 0.	(12)

Note that 1 + (1/d(i)) ∑_{k ∈ N(i)} H(k, i) = R_i = 1/π_i is the return time to i, so in terms of the matrix H = (H(i, j))_{i,j=1}^n, we can write (12) as

   (I − P)H = J − R,							(13)

where J is the all-1 matrix and R is the diagonal matrix with the return times in the diagonal.

We can give a nice geometric interpretation. Consider the graph as a rubber band structure, and attach a weight of d(v) to each node v. Nail the node b to the wall and let the graph find its equilibrium. Each node v will be at a distance of H(v, b) below b. The following two examples are easy to verify using this geometric interpretation.

Example 2.4 The hitting time on a path of length n from one endpoint to the other is n^2; more generally, the hitting time from a node at distance k from an endpoint v to v is n^2 − (n − k)^2 = k(2n − k).

Example 2.5 The hitting time between two nodes at distance k on a circuit of length n is k(n − k).

Example 2.6 The hitting time on the 3-dimensional cube from one vertex to the opposite one is 10.

Example 2.7 Take a clique of size n/2 and attach to it an endpoint of a path of length n/2. Let i be the attachment point, and j the free endpoint of the path. Then H(i, j) ≈ n^3/8.

This last example is the worst for trying to hit a node as soon as possible, at least up to a constant.

Theorem 2.8 For the random walk on a connected graph, for any two nodes at distance r, we have H(i, j) < 2mr < n^3.
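Equation (12) determines all hitting times to a fixed target via a linear system, which makes Example 2.4 easy to verify. Here is a Python sketch (the path length n = 6 is an arbitrary choice) using exact rational Gaussian elimination:

```python
from fractions import Fraction

def hitting_times(adj, target):
    """Solve (12): H(i,t) = 1 + (1/d(i)) sum_{k in N(i)} H(k,t), H(t,t) = 0."""
    nodes = [v for v in adj if v != target]
    ix = {v: i for i, v in enumerate(nodes)}
    m = len(nodes)
    A = [[Fraction(0)] * m for _ in range(m)]
    b = [Fraction(1)] * m
    for v in nodes:
        A[ix[v]][ix[v]] = Fraction(1)
        for w in adj[v]:
            if w != target:
                A[ix[v]][ix[w]] -= Fraction(1, len(adj[v]))
    for i in range(m):                     # Gaussian elimination
        for j in range(i + 1, m):
            r = A[j][i] / A[i][i]
            for c in range(i, m):
                A[j][c] -= r * A[i][c]
            b[j] -= r * b[i]
    H = [Fraction(0)] * m
    for i in reversed(range(m)):
        H[i] = (b[i] - sum(A[i][c] * H[c] for c in range(i + 1, m))) / A[i][i]
    return {**{v: H[ix[v]] for v in nodes}, target: Fraction(0)}

n = 6
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= n] for i in range(n + 1)}
H = hitting_times(path, 0)     # hitting times to endpoint 0

assert H[n] == n * n                                              # endpoint to endpoint
assert all(H[k] == n*n - (n - k)*(n - k) for k in range(n + 1))   # Example 2.4
```

The same solver works on any connected graph, e.g. it reproduces the value 10 of Example 2.6 on the 3-cube.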

Proof. For two adjacent nodes i and j, starting from i, the edge ij is passed from j to i within 2m expected steps (Corollary 2.3), and j must be visited before this happens; so H(i, j) < 2m. More generally, let (i = v_0, v_1, ..., v_r = j) be a path, and let T_k be the first time when v_0, ..., v_k have all been visited. By the above, E(T_{k+1} − T_k) < 2m, and hence E(T_r) < 2mr < n^3.

Example 2.9 H(u, v) is not a symmetric function, even for time-reversible chains (even random walks on undirected graphs): for the first two nodes u and v on a path with n nodes, we have H(u, v) = 1 but H(v, u) = 2n − 3. The values H(u, v) and H(v, u) may be different even on a regular graph (Exercise 2.24). However, if the graph has a node-transitive automorphism group, then H(u, v) = H(v, u) for any two nodes (Corollary 2.12).

Lemma 2.10 (Cycle Reversal Lemma) For the random walk on a connected graph, for any three nodes u, v and w,

   H(u, v) + H(v, w) + H(w, u) = H(u, w) + H(w, v) + H(v, u).

Proof. Starting a random walk at u, walk until v is visited; then walk until w is visited; then walk until u is reached. Call this random sequence a uvwu-tour. The expected number of steps in a uvwu-tour is H(u, v) + H(v, w) + H(w, u).

On the other hand, we can express this number as follows. Let W = (u_0, u_1, ..., u_N = u_0) be a closed walk. The probability that we have walked exactly this way is

   P(W) = ∏_{i=0}^{N−1} 1/d(u_i),

which is independent of the starting point and remains the same if we reverse the order. Let a(W) denote the number of ways this closed walk arises as a uvwu-tour, i.e., the number of occurrences of u in W where we can start W to get a uvwu-tour (note that the same value would be obtained by considering v or w instead of u). We shall show that the number of ways the reverse closed walk W′ = (u_N, u_{N−1}, ..., u_0 = u_N) arises as a uwvu-tour is also a(W). Since the expected length of a uvwu-tour is ∑_W P(W) a(W) N(W), where N(W) is the length of W, this will prove the identity in the lemma. (It will also follow that a(W) is 1 or 2.)

Call an occurrence of u in the closed walk W forward good if, starting from u and following the walk until v occurs, then following it until w occurs, then following it until u occurs, we traverse the whole walk exactly once. Call this occurrence backward good if the same holds with the orientation of W reversed, as well as the roles of v and w. Clearly a(W) is the number of forward good occurrences of u, so it suffices to verify that for every closed walk W, the number of forward good occurrences of u is the same as the number of backward good occurrences. (Note that a forward good occurrence need not be backward good.)

Assume that W arises as a uvwu-tour at least once; say u_0 = u, u_i = v, and u_j = w (0 < i < j < N), where W_1 = {u_1, ..., u_{i−1}} does not contain v, W_2 = {u_{i+1}, ..., u_{j−1}}

does not contain w, and W_3 = {u_{j+1}, ..., u_{N−1}} does not contain u. Assume first that W_2 does not contain u either. Then u_0 is the only forward good occurrence of u, and the last occurrence of u in W_1 is the only backward good occurrence. Second, assume that W_2 contains u. Similarly, we may assume that W_3 contains v and W_1 contains w. Let u_t be the last occurrence of u in W_2. It is easy to check that u_t is backward good. So we see that if W arises as a uvwu-tour, then it also arises as a uwvu-tour.

Assume now that a(W) > 1. Then there must be a second forward good element, and it is easy to check that this can only be the first occurrence u_s of u on W_2; it also follows that all occurrences of v on W_2 must come before u_s, and similarly, all occurrences of w on W_3 must come before the first occurrence of v on W_3, and all occurrences of u on W_1 must come before the first occurrence of w on W_1. But in this case there are exactly two forward good and exactly two backward good occurrences of u. So a(W) = a(W′) = 2.

Corollary 2.11 The vertices of any graph can be ordered so that if u precedes v then H(u, v) ≤ H(v, u).

Proof. Fix any node u, and define an ordering (v_1, ..., v_n) of the nodes so that

   H(u, v_1) − H(v_1, u) ≥ H(u, v_2) − H(v_2, u) ≥ ... ≥ H(u, v_n) − H(v_n, u).

Then by Lemma 2.10,

   H(v_i, v_j) + H(v_j, u) + H(u, v_i) = H(v_j, v_i) + H(v_i, u) + H(u, v_j),

and so for i < j,

   H(v_j, v_i) − H(v_i, v_j) = (H(u, v_i) − H(v_i, u)) − (H(u, v_j) − H(v_j, u)) ≥ 0.

Corollary 2.12 If a graph has a node-transitive automorphism group, then H(u, v) = H(v, u) for any two nodes.

We express hitting times in terms of the transition matrix. By (3), we have

   I − P = ∑_{k=2}^n (1 − λ_k) u_k v_k^T.

Consider the matrix

   (I − P)⁺ = ∑_{k=2}^n (1/(1 − λ_k)) u_k v_k^T.

This is a pseudoinverse of I − P: the matrix I − P is singular, so it does not have a proper inverse, but we have

   (I − P)(I − P)⁺(I − P) = I − P,   (I − P)⁺(I − P)(I − P)⁺ = (I − P)⁺,

and it is easy to check that (I − P)⁺(I − P) = (I − P)(I − P)⁺ = I − 1π^T. Furthermore, (I − P)⁺1 = (I − P)⁺u_1 = 0 from the orthogonality relations, and so (I − P)⁺J = 0.

We can compute (I − P)⁺ by making I − P nonsingular, and then inverting it:

   (I − P)⁺ = (I − P + x·1π^T)^{−1} − (1/x)·1π^T

with an arbitrary x ≠ 0.

Let G = (I − P)⁺R, where R is the diagonal matrix with the return times in the diagonal. Using that R = 2mD^{−1}, we can write this as

   G = 2m D^{−1/2} (I − D^{−1/2}AD^{−1/2})⁺ D^{−1/2} = 2m D^{−1/2} (I − P̂)⁺ D^{−1/2},

showing that G is a symmetric matrix.

Lemma 2.13 H(i, j) = G_jj − G_ij.

Proof. Recall the equation

   (I − P)H = J − R.							(14)

Unfortunately, the matrix I − P is singular, and so (14) does not uniquely determine H. But using the generalized inverse,

   (I − P)H = (I − P)(I − P)⁺(I − P)H = (I − P)(I − P)⁺(J − R) = −(I − P)(I − P)⁺R,

and hence (I − P)(H + (I − P)⁺R) = 0. The kernel of I − P consists of the multiples of 1, hence (I − P)X = 0 implies that every column of X is constant. So

   H = −(I − P)⁺R + 1g^T = −G + 1g^T					(15)

with some vector g. Using that H_ii = 0, we get that g_i = G_ii.

Corollary 2.14 Let λ_1 = 1 > λ_2 ≥ ... ≥ λ_n be the eigenvalues of the transition matrix P of the random walk on the graph G, and let u_1, ..., u_n, v_1, ..., v_n be the corresponding right and left eigenvectors. Then

   H(i, j) = 2m ∑_{k=2}^n (u_kj v_kj − u_ki v_kj) / (d(j)(1 − λ_k)).

Proof. We have

   G_ij = ((I − P)⁺R)_ij = ∑_{k=2}^n u_ki v_kj R_j / (1 − λ_k) = 2m ∑_{k=2}^n u_ki v_kj / (d(j)(1 − λ_k)).

Substituting in the formula of Lemma 2.13, the corollary follows.

Lemma 2.15 (Random Target Lemma) For every Markov chain,

   ∑_j π_j H(i, j) = N							(16)

is independent of the starting node i.

Proof. Let f(i) = ∑_j π_j H(i, j); we show that f is harmonic, and hence constant. Indeed, for every node i,

   ∑_j p_{ij} f(j) = ∑_j p_{ij} ∑_k π_k H(j, k) = ∑_k π_k ∑_j p_{ij} H(j, k)
                   = ∑_{k ≠ i} π_k (H(i, k) − 1) + π_i (R_i − 1) = ∑_k π_k H(i, k) = f(i).

In the case of random walks on a graph, we can express N by the spectrum. Using (15),

   Hπ = −(I − P)⁺Rπ + 1g^Tπ = −(I − P)⁺1 + (g^Tπ)1 = (g^Tπ)1.

Hence

   N = g^Tπ = ∑_{k=2}^n 1/(1 − λ_k).

2.3 Commute time

Lemma 2.16 The probability that a random walk on a graph starting at u visits v before returning to u is R_u/comm(u, v).

Proof. Let T be the first time the random walk starting at u returns to u, and let S be the first time it returns to u after visiting v. Then E(T) = R_u and E(S) = comm(u, v). Clearly S ≥ T, and equality holds if and only if the random walk visits v before returning to u. Hence, denoting by p the probability of this event,

   E(S − T) = p · 0 + (1 − p) comm(u, v).

Thus

   comm(u, v) = E(S) = E(T) + E(S − T) = R_u + (1 − p) comm(u, v),

and the lemma follows.
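Lemma 2.15 can be checked on a concrete graph by computing all hitting times exactly (solving (12) once per target) and verifying that ∑_j π_j H(i, j) comes out the same for every start i. A Python sketch on an arbitrary small graph (triangle plus pendant vertex):

```python
from fractions import Fraction

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}        # triangle + pendant
pi = {v: Fraction(len(adj[v]), sum(len(x) for x in adj.values())) for v in adj}

def hit_to(t):
    """Exact hitting times H(., t) from equation (12)."""
    nodes = [v for v in adj if v != t]
    ix = {v: i for i, v in enumerate(nodes)}
    k = len(nodes)
    A = [[Fraction(0)] * k for _ in range(k)]
    b = [Fraction(1)] * k
    for v in nodes:
        A[ix[v]][ix[v]] = Fraction(1)
        for w in adj[v]:
            if w != t:
                A[ix[v]][ix[w]] -= Fraction(1, len(adj[v]))
    for i in range(k):                                     # elimination
        for j in range(i + 1, k):
            r = A[j][i] / A[i][i]
            for c in range(i, k):
                A[j][c] -= r * A[i][c]
            b[j] -= r * b[i]
    H = [Fraction(0)] * k
    for i in reversed(range(k)):
        H[i] = (b[i] - sum(A[i][c] * H[c] for c in range(i + 1, k))) / A[i][i]
    return {v: H[ix[v]] for v in nodes}

Hto = {t: hit_to(t) for t in adj}
# N(i) = sum_j pi_j H(i, j); the term j = i contributes nothing since H(i, i) = 0.
N = {i: sum(pi[j] * Hto[j].get(i, Fraction(0)) for j in adj) for i in adj}

assert len(set(N.values())) == 1     # the same constant for every start node
```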

Lemma 2.17 If a connected graph G is considered as an electrical network (with unit resistances on the edges), then the effective resistance R(u, v) between nodes u and v is comm(u, v)/(2m).

Proof. Let φ : V → R be the (unique) function that satisfies φ(u) = 0, φ(v) = 1, and is harmonic at the other nodes. Let us keep node u at potential 0 and node v at potential 1. By Ohm's Law, the current on an edge ij, in the direction from i to j, is φ(j) − φ(i). Hence the total current from u to v is ∑_{j ∈ N(u)} φ(j), and the effective resistance of the network is

   R(u, v) = (∑_{j ∈ N(u)} φ(j))^{−1}.					(17)

On the other hand, for every j, φ(j) is the probability that the walk starting at j hits v before u. Hence (1/d(u)) ∑_{j ∈ N(u)} φ(j) is the probability that starting from u, we visit v before returning to u. Thus by Lemma 2.16,

   (1/d(u)) ∑_{j ∈ N(u)} φ(j) = R_u / comm(u, v) = 2m / (d(u) comm(u, v)).

Together with (17), this proves the lemma.

Theorem 2.18 For any two nodes at distance r, comm(i, j) < 4mr < 2n^3.

This follows similarly to Theorem 2.8.

2.4 Cover time

Example 2.19 Assuming that we start from an endpoint, the cover time of the path on n nodes is (n − 1)^2, since it suffices to reach the other endnode.

Example 2.20 To determine the cover time C_n of a cycle of length n, note that it is the same as the time needed on a very long path, starting from the midpoint, to visit n nodes. We have to reach first n − 1 nodes, which takes C_{n−1} steps on average. At this point, we have covered a subpath with n − 1 nodes, and we are sitting at one of its endpoints. To reach a new node means to reach one of the endnodes of a path with n + 1 nodes from a neighbor of an endnode. Clearly, this is the same as the hitting time between two consecutive nodes of a circuit of length n, which is one less than the return time to the second node, i.e., n − 1. This leads to the recurrence

   C_n = C_{n−1} + (n − 1).

Hence C_n = n(n − 1)/2.
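Lemma 2.17 can be cross-checked against Example 2.5 without any simulation: on a cycle of length n, two nodes at distance k have comm(u, v) = 2k(n − k) by Example 2.5, while the two arcs between them act as resistors of k and n − k ohms in parallel. A small Python check (n = 12 is an arbitrary choice):

```python
from fractions import Fraction

n = 12          # cycle length, so m = n edges
for k in range(1, n):
    comm = 2 * k * (n - k)                            # H(u,v) + H(v,u), Example 2.5
    R = 1 / (Fraction(1, k) + Fraction(1, n - k))     # parallel arcs of k and n-k edges
    assert R == Fraction(k * (n - k), n)
    assert comm == 2 * n * R                          # Lemma 2.17: comm = 2m * R
print("comm(u, v) = 2m * R(u, v) on the cycle")
```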

Theorem 2.21 C_max < 2nm.

Let H_max = max_{u,v} H(u, v) and H_min = min_{u,v} H(u, v).

Theorem 2.22 har(n) H_min ≤ C_max ≤ har(n) H_max.

Proof. Let (σ_1, ..., σ_n) be a random permutation of the vertices, and let A_k be the event that σ_k is the last visited node of {σ_1, ..., σ_k}. Then P(A_k) = 1/k; note that this is independent of the walk. Let T_k be the first time σ_1, ..., σ_k are all visited. Then E(T_k − T_{k−1} | A_k) ≤ H_max, and E(T_k − T_{k−1} | ¬A_k) = 0. Hence

   E(T_k − T_{k−1}) ≤ (1/k) H_max + ((k − 1)/k) · 0 = H_max/k.

Summing over all k, we get the upper bound in the theorem. The lower bound follows similarly.

2.5 Universal traverse sequences

Let G be a connected d-regular graph, u ∈ V(G), and assume that at each node, the ends of the edges incident with the node are labeled 1, 2, ..., d. A traverse sequence (for this graph, starting point, and labeling) is a sequence (h_1, h_2, ..., h_t) ∈ {1, ..., d}^t such that if we start a walk at v_0 = u and at the i-th step, we leave the current vertex through the edge labeled h_i, then we visit every vertex. A universal traverse sequence is a sequence which is a traverse sequence for every connected d-regular graph on n vertices, every labeling of it, and every starting point.

Theorem 2.23 For every d ≥ 2 and n ≥ 2, there exists a universal traverse sequence of length O(d^2 n^4).

Proof. The construction is easy: we consider a random sequence. More exactly, let t = 8d^2 n^3 log_2 n, and let H = (h_1, ..., h_t) be randomly chosen from {1, ..., d}^t. For a fixed G, starting point, and labeling, the walk defined by H is just a random walk; so the probability p that H is not a traverse sequence is the same as the probability that a random walk of length t does not visit all nodes.

18 By Theorem 2.21, the expected time needed to visit all nodes is at most dn 2. Hence (by Markov s Inequality) the probability that after 2dn 2 steps we have not seen all nodes is less than 1/2. Since we may consider the next 2dn 2 steps as another random walk etc., the probability that we have not seen all nodes after t steps is less than 2 t/(4n2) = n 2nd. Now the total number of d-regular graphs G on n nodes, with the ends of the edges labeled, is less than n dn (less than n d choices at each node), and so the probability that H is not a traverse sequence for one of these graphs, with some starting point, is less than nn nd n 2nd < 1. So at least one sequence of length t is a universal traverse sequence. Exercise 2.24 Show by an example that H(u, v) can be different from H(v, u) even for two nodes of a regular graph. Exercise 2.25 Consider random walk on a connected regular graph G. (a) The average of H(s, t) over all s N(t) is exactly n 1. (b) The average of H(s, t) over all s V (G) \ {t} is at least n 1. (c) The average of H(t, s) over all s V (G), with weights π s, is at least n 1. Exercise 2.26 Prove that the mean hitting time between two antipodal vertices of the k-cube Q k is asymptotically 2 k. Exercise 2.27 What is the cover time of the path when starting from an internal node? Exercise 2.28 The mean commute time between any pair of vertices of a d- regular graph is at least n and at most 2nd/(d λ 2). Exercise 2.29 Let G denote the graph obtained from G by identifying s and t, and let T (G) denote the number of spanning trees of G. Prove that R st = T (G )/T (G). Exercise 2.30 (Raleigh s Principle) Adding any edge to a graph G does not increase the resistance R st. Exercise 2.31 Let G be obtained from the graph G by adding a new edge (a, b), and let s, t V (G). (a) Prove that the mean commute time between s and t in G is not larger than the mean commute time in G. (b) Show by an example that similar assertion is not valid for the mean hitting time. 
(c) If a = t, then the mean hitting time from s to t is not larger in G' than in G.

Exercise 2.32 Find a formula for the commute time in terms of the spectrum.

Exercise 2.33 Prove that the commute time between two nodes of a regular graph is at least n.

Exercise 2.34 Let l(u, v) denote the probability that a random walk starting at u visits every vertex before hitting v. (a) Prove that if G is a circuit of length n, then l(u, v) = 1/(n − 1) for all u ≠ v. (b) If u and v are two non-adjacent vertices of a connected graph G such that {u, v} is not a cutset, then there is a neighbor w of u such that l(w, v) < l(u, v). (c) Assume that for every pair u ≠ v in V(G), l(u, v) = 1/(n − 1). Show that G is either a circuit or a complete graph.
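Several of the exercises above concern hitting times, which are easy to estimate empirically. The sketch below (the path graph and the trial count are our choices for illustration) estimates H(u, v) by direct simulation and checks it against the classical fact that the mean hitting time between the endpoints of a path with n edges is n^2.

```python
import random

def hitting_time_estimate(adj, u, v, trials=20000, rng=random.Random(1)):
    """Monte Carlo estimate of the mean hitting time H(u, v): the expected
    number of steps a random walk started at u needs to first reach v."""
    total = 0
    for _ in range(trials):
        w = u
        while w != v:
            w = rng.choice(adj[w])  # step to a uniform random neighbor
            total += 1
    return total / trials

# Path 0 - 1 - 2 - 3 (n = 3 edges): H(0, 3) = n^2 = 9.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
est = hitting_time_estimate(path, 0, 3)
assert 8.5 < est < 9.5
```

The same routine, run on a small regular graph in both directions, can be used to experiment with Exercise 2.24.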

Exercise 2.35 Let N(u, v) denote the expected number of nodes a random walk from u visits before hitting v (including u; not the number of steps!). Prove that for every u ∈ V(G) there exists a v ∈ V(G) \ {u} such that N(u, v) ≤ n/2.

Exercise 2.36 Let b be the expected number of steps before our random walk visits more than half of the vertices. Prove that b ≤ 2H_max.

Exercise 2.37 For A ⊆ V, define H^A_min = min_{u,v∈A} H(u, v). Prove that C_max ≥ har(|A|) H^A_min.

3 Mixing

Perhaps the most important use of random walks in practice is sampling: choosing a random element from a prescribed distribution over a large set. A general method for sampling from a probability distribution π over a large and complicated set V is the following. We define a graph G with node set V whose stationary distribution is π. Let us assume the graph is not bipartite (for example, consider the lazy walk on it). By Theorem 1.8, σ^t → π as t → ∞ for every starting distribution σ. This means that by simulating a random walk on G for sufficiently many steps, we get a point whose distribution is close to π. The question is, what does "close" mean, and how long do we have to walk?

The total variation distance between two probability distributions on the same finite set V is defined by

d_var(σ, τ) = max_{A⊆V} (σ(A) − τ(A)).

Since σ(A) − τ(A) = τ(V \ A) − σ(V \ A), we could define the same value as the maximum of τ(A) − σ(A), or as the maximum of |σ(A) − τ(A)|. It can also be expressed as

d_var(σ, τ) = (1/2) Σ_{i∈V} |σ_i − τ_i|.

The ε-mixing time is defined as the smallest t such that d_var(σ^t, π) ≤ ε for every starting distribution σ. It is easy to see that the worst starting distribution is one concentrated on a single node.

This definition makes sense only for non-bipartite graphs. To avoid this exception, we often consider the lazy walk: at each step, we flip a coin and stay where we are if it is Heads, and move according to the random walk if it is Tails. This can be described as the random walk on a graph where we add d(i) loops to each node i.
(Recall that adding a loop increases the degree by 1 only.) Making the random walk lazy does not change the stationary distribution. It doubles all the previous times in expectation, since we are idle in half of the steps.
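Both facts about the lazy walk are easy to check numerically. The sketch below (a small bipartite path graph of our choosing) verifies that laziness preserves the stationary distribution while removing the eigenvalue −1 caused by bipartiteness.

```python
import numpy as np

# Random walk on the path 0 - 1 - 2 (a bipartite graph).
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
pi = np.array([0.25, 0.5, 0.25])     # stationary: pi_i = d(i) / 2m, m = 2

P_lazy = 0.5 * (P + np.eye(3))       # stay with probability 1/2

assert np.allclose(pi @ P, pi)       # pi is stationary for the original walk
assert np.allclose(pi @ P_lazy, pi)  # ... and is unchanged for the lazy walk

# P has eigenvalue -1 (bipartiteness); all lazy eigenvalues are >= 0.
assert min(np.linalg.eigvals(P).real) < -0.99
assert min(np.linalg.eigvals(P_lazy).real) > -1e-9
```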

The transition matrix P' of the lazy random walk can be expressed by the transition matrix P of the original walk very simply:

P' = (1/2)(P + I).

Since all eigenvalues of P are at least −1, all eigenvalues of P' are nonnegative. In other words, the symmetrized version is positive semidefinite.

Example 3.1 On a complete graph K_n, starting from a given node, we have d_var(σ^1, π) = 1/n. More generally (exercise),

d_var(σ^t, π) = 1/(n(n−1)^{t−1}).

The ε-mixing time is in general difficult to compute, or even to estimate. We are going to discuss four different methods.

3.1 Eigenvalues

Theorem 3.2 Let G be a connected graph and let λ_1 = 1 > λ_2 ≥ ... ≥ λ_n be the eigenvalues of its transition matrix. Set λ = max{λ_2, −λ_n} and π_min = min_i π_i. Then

|σ^t(S) − π(S)| ≤ √(π(S)/π_min) · λ^t.

If we start at node i, then

|p^t_{ij} − π_j| ≤ √(π_j/π_i) · λ^t.

Proof. Consider the spectral decomposition of P:

P = Σ_{k=1}^n λ_k u_k v_k^T,    (18)

where u_k and v_k are the right and left eigenvectors of P, with u_1 = 1 and v_1 = π. So

σ^t(S) = σ^T P^t 1_S = Σ_{k=1}^n λ_k^t (σ^T u_k)(v_k^T 1_S) = π(S) + Σ_{k=2}^n λ_k^t (σ^T u_k)(v_k^T 1_S),

and hence

|σ^t(S) − π(S)| ≤ λ^t Σ_{k=2}^n |σ^T u_k| |v_k^T 1_S| ≤ λ^t Σ_{k=1}^n |σ^T u_k| |v_k^T 1_S|.

We can estimate this using the Cauchy–Schwarz inequality:

Σ_{k=1}^n |σ^T u_k| |v_k^T 1_S| ≤ (Σ_{k=1}^n (σ^T u_k)^2)^{1/2} (Σ_{k=1}^n (v_k^T 1_S)^2)^{1/2}.

Writing Π = diag(π) and using that (w_k)_{k=1}^n, the orthonormal eigenbasis of the symmetrized matrix Π^{1/2} P Π^{−1/2}, satisfies u_k = Π^{−1/2} w_k and v_k = Π^{1/2} w_k, we get

Σ_{k=1}^n (σ^T u_k)^2 = Σ_{k=1}^n (σ^T Π^{−1/2} w_k)^2 = ‖Π^{−1/2} σ‖^2 = Σ_i σ_i^2/π_i ≤ (1/π_min) Σ_i σ_i = 1/π_min,    (19)

and similarly

Σ_{k=1}^n (v_k^T 1_S)^2 = ‖Π^{1/2} 1_S‖^2 = Σ_{i∈S} π_i = π(S).

Thus

|σ^t(S) − π(S)| ≤ λ^t √(π(S)/π_min),

as claimed. If σ is concentrated on node i, then we don't have to make the last step in (19): Σ_j σ_j^2/π_j = 1/π_i exactly, which gives the second bound.

Example 3.3 Consider the k-cube as above. For a ∈ {0,1}^k, consider the vector v^a ∈ R^{Q_k} defined by v^a_x = (−1)^{a^T x}. It is easy to check that this is an eigenvector of P with eigenvalue 1 − 2|a|_1/k, where |a|_1 = Σ_{i=1}^k a_i. Since there are 2^k such vectors v^a, these are all the eigenvectors. So the eigenvalues of P are 1, 1 − 2/k, 1 − 4/k, ..., 1 − 2k/k = −1, and the eigenvalues of the lazy walk P' are 1, 1 − 1/k, 1 − 2/k, ..., 1 − k/k = 0. Hence by Theorem 3.2, we have

d_var(σ^t, π) ≤ 2^{k/2} (1 − 1/k)^t < 2^{k/2} e^{−t/k}.

So for t = k^2/2 + Ck, we have d_var(σ^t, π) < (2/e)^{k/2} e^{−C}.
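The spectrum claimed in Example 3.3 is easy to verify numerically; the sketch below builds the transition matrix of Q_k (k = 3 is our choice) and compares its eigenvalues with 1 − 2j/k, each with multiplicity binom(k, j).

```python
import numpy as np
from itertools import product
from math import comb

k = 3
verts = list(product([0, 1], repeat=k))
idx = {v: i for i, v in enumerate(verts)}
n = len(verts)                            # 2^k vertices

# Transition matrix of the walk on Q_k: flip one of the k coordinates.
P = np.zeros((n, n))
for v in verts:
    for j in range(k):
        w = list(v)
        w[j] ^= 1
        P[idx[v], idx[tuple(w)]] = 1.0 / k

eigs = np.sort(np.linalg.eigvalsh(P))     # P is symmetric (Q_k is regular)
# Predicted spectrum: 1 - 2j/k with multiplicity C(k, j), j = 0, ..., k.
predicted = np.sort([1 - 2 * j / k for j in range(k + 1)
                     for _ in range(comb(k, j))])
assert np.allclose(eigs, predicted)
```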

3.2 Coupling

A coupling of a Markov chain is a sequence of random pairs of nodes ((v^0, u^0), (v^1, u^1), ...) such that each of the sequences (v^0, v^1, ...) and (u^0, u^1, ...) is a walk of the chain.

Lemma 3.4 (Coupling Lemma) Let (v^0, v^1, ...) be a random walk starting from distribution σ, and let (u^0, u^1, ...) be another random walk starting from the stationary distribution π. Then

d_var(σ^t, π) ≤ P(v^t ≠ u^t).

If we can couple the two random walks so that P(v^t ≠ u^t) < ε for some t, then we get a bound on the ε-mixing time.

Proof. Clearly the distribution of u^t is stationary for every t ≥ 0. Hence for every S ⊆ V,

P(v^t ∈ S) − π(S) = P(v^t ∈ S) − P(u^t ∈ S)
= P(v^t ∈ S | u^t = v^t) P(u^t = v^t) + P(v^t ∈ S | u^t ≠ v^t) P(u^t ≠ v^t)
− P(u^t ∈ S | u^t = v^t) P(u^t = v^t) − P(u^t ∈ S | u^t ≠ v^t) P(u^t ≠ v^t)
= (P(v^t ∈ S | u^t ≠ v^t) − P(u^t ∈ S | u^t ≠ v^t)) P(u^t ≠ v^t)
≤ P(u^t ≠ v^t).

Example 3.5 Consider a random walk on the k-cube Q_k. Since this graph is bipartite, we look at the lazy version. The vertices of the cube are sequences (x_1, x_2, ..., x_k), where x_i ∈ {0, 1}. Let us assume that we start at the vertex (0, ..., 0). Then the lazy random walk can be described as follows: being at (x_1, x_2, ..., x_k), we choose a uniform random coordinate 1 ≤ i ≤ k, and flip it with probability 1/2. So a tour of length t is described by a sequence (i_1, ..., i_t) (i_s ∈ {1, ..., k}) and a sequence (ε_1, ..., ε_t) (ε_s ∈ {0, 1}); step s consists of adding ε_s to x_{i_s} modulo 2.

If we condition on the sequence (i_1, ..., i_t), then each x_i with i ∈ {i_1, ..., i_t} is uniformly distributed over {0, 1}, and these distributions are independent. So conditioning on {i_1, ..., i_t} = {1, ..., k}, the distribution is uniform on all vertices of the cube. The probability that {i_1, ..., i_t} ≠ {1, ..., k} can be estimated from the Coupon Collector's Problem: the expectation of the first t for which {i_1, ..., i_t} = {1, ..., k} is k·har(k), and so the probability that in 2k·har(k) steps we have not seen all directions is at most 1/2 by Markov's Inequality.
It follows that after 2Nk·har(k) steps, the probability that {i_1, ..., i_t} ≠ {1, ..., k} is at most 2^{−N}. Hence the ε-mixing time is at most 2 log_2(1/ε) · k·har(k). Compare this time bound with Exercise 2.26: the mean hitting time between antipodal vertices of Q_k is approximately 2^k.

Consider the lazy walk v^0, v^1, ... on the k-cube Q_k started from an arbitrary distribution σ^0. We couple it with the lazy random walk u^0, u^1, ... started from the stationary

(uniform) distribution π. Recall that the random walk v^0, v^1, ... is determined by two random sequences: a sequence (i_1, i_2, ...) (i_s ∈ {1, ..., k}) of directions, and a sequence (ε_1, ε_2, ...) (ε_s ∈ {0, 1}) of go-or-wait decisions. Let us use the same sequence of directions for u^0, u^1, ..., but choose its go-or-wait sequence (ε'_1, ε'_2, ...) as follows. At a given time t, let v^t = (x_1, ..., x_k) and u^t = (y_1, ..., y_k). If x_{i_t} = y_{i_t}, then let ε'_t = ε_t; else, let ε'_t = 1 − ε_t. After this step, the i_t-th coordinate of u^{t+1} is equal to the i_t-th coordinate of v^{t+1}, and they remain equal forever. So once all coordinates have been chosen, we have u^t = v^t. By the same computation as before, we see that after t = 2Ck ln k steps, we have

d_var(σ^t, π) ≤ P(u^t ≠ v^t) ≤ 1/C.

Example 3.6 Consider the lazy walk v^0, v^1, ... on the path of length n, started from an arbitrary distribution σ^0. We couple it with the lazy random walk u^0, u^1, ... started from the stationary distribution π. A step of the random walk is generated by a coin flip interpreted as Left or Right, and another coin flip interpreted as Go or Stay. If we are at an endpoint of the path, then both Left and Right mean the same move.

It will be convenient to assume that u^t and v^t start at an even distance from each other. This can be achieved by interpreting the first Go-or-Stay outcome differently for the two walks if necessary. From here on, the two walks Go and Stay at the same time. The coupling of the directions is also very simple: the two moves are independent until the two walks collide, and from then on, they stay together. (Since the distance stays even, the walks cannot jump over each other.)

Suppose that u^0 is to the right of v^0. Then by the time v^t reaches the right end of the path, the two walks must have collided. This takes at most n^2 expected time. By Markov's Inequality, the probability that v^t ≠ u^t for t = 2n^2 is less than 1/2, and so after t = 2n^2 log_2(1/ε) steps,

d_var(σ^t, π) ≤ P(u^t ≠ v^t) ≤ ε.
Note that if we start from an endpoint of the path and d_var(σ^t, π) < 1/4 for some t, then the walk must have reached the right half of the path with substantial probability. If T is the first time it reaches the midpoint (say, n is even), then E(T) = n^2/4. With some work, one can deduce from this that d_var(σ^t, π) < 1/4 implies that t > cn^2 for an absolute constant c > 0.

3.2.1 Random coloring of a graph

We want to choose a uniformly random k-coloring of a graph G (such that adjacent nodes get different colors). The method we describe works when the maximum degree D of G satisfies k > 3D. We define a graph H whose nodes are the proper k-colorings of G, with two of them connected by an edge if they differ at a single node. The degrees of H are bounded by kn; we make H kn-regular by adding an appropriate number of loops at each node.

While the graph H itself may be exponentially large, it is easy to generate a random walk on it. If we have a k-coloring α, we select a uniformly random node v of G and a uniformly random color i; the pair (v, i) is the seed of the step. The new coloring α' is defined by

α'(u) = i if u = v, and α'(u) = α(u) otherwise,

if this is a legal coloring; else, let α' = α. This random walk on colorings is called the Glauber dynamics.

Theorem 3.7 If k > 3D, then starting from any k-coloring α^0, we have d_var(α^t, π) < n e^{−t/(kn)}. So if t = kn ln(n/ε), then d_var(α^t, π) < ε.

Proof. We use the Coupling Lemma 3.4. Starting from two colorings α^0 and β^0 (where β^0 is uniformly random), we construct two random walks α^0, α^1, ... and β^0, β^1, ... by using the same random seed in both. We want to show that

P(α^t ≠ β^t) ≤ n e^{−t/(kn)}.    (20)

Let U_t = {v ∈ V : α^t(v) ≠ β^t(v)}, W_t = V \ U_t and X_t = |U_t|. For v ∈ V, let a(v) denote the number of edges joining v to the other class of the partition {U_t, W_t}. We claim that

E(X_{t+1} | X_t) ≤ (1 − 1/(kn)) X_t.    (21)

Indeed, let (v, i) be the seed at step t, and let us fix v. If v ∈ U_t, then X_t − 1 ≤ X_{t+1} ≤ X_t, and we have X_{t+1} = X_t − 1 if color i does not occur among the neighbors of v in either coloring α^t or β^t. Since the neighbors of v in W_t have the same color in both colorings, at least a(v) of the (at most 2d(v)) excluded colors are counted twice, and so

P(X_{t+1} = X_t − 1 | v) ≥ (k − 2d(v) + a(v))/k.    (22)

If v ∈ W_t, then X_t ≤ X_{t+1} ≤ X_t + 1, and we lose only if color i occurs among the neighbors of v in exactly one of the colorings α^t and β^t. There are at most 2a(v) such colors, and so

P(X_{t+1} = X_t + 1 | v) ≤ 2a(v)/k.    (23)

Thus

E(X_{t+1} | X_t) ≤ X_t − Σ_{i∈U_t} (k − 2d(i) + a(i))/(kn) + Σ_{i∈W_t} 2a(i)/(kn).

Using that

Σ_{i∈U_t} a(i) = Σ_{i∈W_t} a(i),

we get

E(X_{t+1} | X_t) ≤ X_t − Σ_{i∈U_t} (k − 2d(i) − a(i))/(kn) ≤ X_t − Σ_{i∈U_t} (k − 3d(i))/(kn).

Since k > 3D ≥ 3d(i), the last sum is at least X_t/(kn), and so (21) follows. From (21) we get by induction

E(X_t) ≤ (1 − 1/(kn))^t · n,

and hence by Markov's Inequality,

P(α^t ≠ β^t) = P(X_t ≥ 1) ≤ (1 − 1/(kn))^t · n < e^{−t/(kn)} n.

3.3 Conductance

The conductance of a Markov chain is defined by

Φ = min_A [Σ_{i∈A, j∈V\A} π_i p_ij] / [π(A) π(V \ A)],

where the minimum is extended over all non-empty proper subsets A of V. The numerator is the frequency with which a very long random walk steps from A to V \ A; the denominator is the frequency with which a very long sequence of independent random nodes from the stationary distribution steps from A to V \ A. So the ratio measures how strongly non-independent the nodes in a random walk are.

Let (say) π(A) ≤ 1/2; then π(V \ A) ≥ 1/2 and

Σ_{i∈A, j∈V\A} π_i p_ij ≤ Σ_{i∈A} Σ_{j∈V} π_i p_ij = Σ_{i∈A} π_i = π(A),

and hence Φ ≤ 2.

In the case of a random walk on a graph, we have π_i p_ij = 1/(2m) for every edge ij. Letting e(A, V \ A) denote the number of edges connecting A to V \ A, we get

Φ = min_A e(A, V \ A)/(2m π(A) π(V \ A)).

We can use the conductance to bound the eigenvalue gap, and through this, the ε-mixing time.
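For small graphs the conductance can be computed by brute force directly from this formula; a short sketch (the 4-cycle test graph is our choice):

```python
from itertools import combinations

def conductance(adj):
    """Brute-force conductance of the random walk on a graph:
    Phi = min_A e(A, V\\A) / (2m * pi(A) * pi(V\\A)), with pi_i = d(i)/2m."""
    nodes = list(adj)
    two_m = sum(len(adj[v]) for v in nodes)       # 2m = sum of degrees
    best = float("inf")
    for r in range(1, len(nodes)):
        for A in combinations(nodes, r):
            A = set(A)
            piA = sum(len(adj[v]) for v in A) / two_m
            cut = sum(1 for v in A for w in adj[v] if w not in A)
            best = min(best, cut / (two_m * piA * (1 - piA)))
    return best

# 4-cycle 0-1-2-3-0: the worst cut is a pair of adjacent nodes, giving
# Phi = 2 / (8 * (1/2) * (1/2)) = 1.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert abs(conductance(cycle4) - 1.0) < 1e-9
```

The exhaustive minimization over all 2^n − 2 cuts is of course only feasible for tiny graphs; the point of Lemma 3.12 below is to bound Φ without it.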

Theorem 3.8 Φ^2/16 ≤ 1 − λ_2 ≤ Φ.

We need a couple of lemmas. The following formula for the eigenvalue gap in the case of a random walk on a graph follows from more general results in linear algebra, but we describe a direct proof.

Lemma 3.9

1 − λ_2 = (1/2m) min { Σ_{ij∈E(G)} (x_i − x_j)^2 : Σ_i π_i x_i = 0, Σ_i π_i x_i^2 = 1 }.

Proof. Let y be the unit eigenvector of the symmetrized transition matrix Π^{1/2} P Π^{−1/2} belonging to the eigenvalue λ_2, and define x_i = y_i/√π_i. The vector y is orthogonal to the eigenvector (√π_i : i ∈ V) belonging to the eigenvalue 1, and hence

Σ_i π_i x_i = Σ_i √π_i y_i = 0, and Σ_i π_i x_i^2 = Σ_i y_i^2 = 1.

Furthermore,

(1/2m) Σ_{ij∈E(G)} (x_i − x_j)^2 = (1/2m) Σ_{i∈V} d(i) x_i^2 − (1/m) Σ_{ij∈E(G)} x_i x_j = Σ_{i∈V} π_i x_i^2 − y^T (Π^{1/2} P Π^{−1/2}) y = 1 − λ_2.

It follows by a similar computation that this choice of x minimizes the right hand side.

The second lemma can be considered as a linearized version of the theorem.

Lemma 3.10 Let G be a graph with conductance Φ. Let y ∈ R^V, and assume that π({i : y_i > 0}) ≤ 1/2, π({i : y_i < 0}) ≤ 1/2 and Σ_i π(i)|y_i| = 1. Then

(1/m) Σ_{ij∈E} |y_i − y_j| ≥ Φ.

Proof. Label the nodes by 1, ..., n so that y_1 ≤ ... ≤ y_t < 0 = y_{t+1} = ... = y_s < y_{s+1} ≤ ... ≤ y_n. Set S_i = {1, ..., i}. Substituting y_j − y_i = (y_{i+1} − y_i) + ... + (y_j − y_{j−1}) for each edge, we get

Σ_{ij∈E} |y_i − y_j| = Σ_{i=1}^{n−1} e(S_i, V \ S_i)(y_{i+1} − y_i) ≥ 2mΦ Σ_{i=1}^{n−1} (y_{i+1} − y_i) π(S_i) π(V \ S_i).

27 Using that π(s i ) 1/2 for i t, π(s i ) 1/2 for i s + 1, and that y i+1 y i = 0 for t < i < s, we obtain y i y j mφ ij E t (y i+1 y i )π(s i ) + mφ i=1 = mφ i π(i) y i = mφ. n 1 i=t+1 (y i+1 y i )π(v \ S i ) Proof of Theorem 3.8. We prove the upper bound first. By Lemma 3.9, it suffices to exhibit a vector x R V such that π i x i = 0, π i x 2 i = 1 (24) and i 1 2m ij E(G) i (x i x j ) 2 = Φ. (25) Let S be the minimizer in the definition of the conductance, and consider a vector of the type x i = { a, if i S, b, if i V \ S. Then the conditions are π(s)a + π(v \ S)b = 0, π(s)a 2 + π(v \ S)b 2 = 1. Solving these equations for a and b, we get π(v \ S) a = 2mπ(S), b = π(s) 2mπ(V \ S), and then straightforward substitution shows that (25) is satisfied as well. x R V To prove the lower bound, we again invoke Lemma 3.9: we prove that for every vector satisfying (24), we have ij E(G) (x i x j ) 2 Φ2 8. (26) Let x be any vector satisfying (24). We may assume that x 1 x 2... x n. Let k (1 k n) be the index for which π({1,..., k 1}) 1/2 and π({k + 1,..., n}) < 1/2. Setting z i = max{0, x i x k } and choosing the sign of x appropriately, we may assume that π(i)zi 2 1 π(i)(x i x k ) 2 = 1 π(i)x 2 i x k π(i)x i + m x2 k i i = m 2 x2 k 1 2. i i 27

Now Lemma 3.10 can be applied to the numbers y_i = z_i^2 / Σ_j π(j) z_j^2, and we obtain that

Σ_{ij∈E} |z_i^2 − z_j^2| ≥ mΦ Σ_i π(i) z_i^2.

On the other hand, using the Cauchy–Schwarz inequality,

Σ_{ij∈E} |z_i^2 − z_j^2| ≤ (Σ_{ij∈E} (z_i − z_j)^2)^{1/2} (Σ_{ij∈E} (z_i + z_j)^2)^{1/2}.

It is easy to see that

Σ_{ij∈E} (z_i − z_j)^2 ≤ Σ_{ij∈E} (x_i − x_j)^2,

while the second factor can be estimated as follows:

Σ_{ij∈E} (z_i + z_j)^2 ≤ 2 Σ_{ij∈E} (z_i^2 + z_j^2) = 2 Σ_{i∈V} d(i) z_i^2 = 4m Σ_i π(i) z_i^2.

Combining these inequalities, we obtain

Σ_{ij∈E} (x_i − x_j)^2 ≥ (Σ_{ij∈E} |z_i^2 − z_j^2|)^2 / Σ_{ij∈E} (z_i + z_j)^2 ≥ Φ^2 m^2 (Σ_i π(i) z_i^2)^2 / (4m Σ_i π(i) z_i^2) = (Φ^2 m/4) Σ_i π(i) z_i^2 ≥ Φ^2 m/8.

Dividing by 2m, the theorem follows.

Corollary 3.11 For any starting distribution σ, any subset A ⊆ V and any t ≥ 0,

|σ^t(A) − π(A)| ≤ d_var(σ^t, π) ≤ (1/√π_min)(1 − Φ^2/16)^t.

To estimate the conductance, the following lemma is often very useful.

Lemma 3.12 Let F be a multiset of paths in G, and suppose that for every pair i ≠ j of nodes there are π_i π_j N paths in the family connecting them, for some N ≥ 1. Suppose also that each edge of G belongs to at most K of these paths. Then Φ ≥ N/(2mK).

Proof. Let A ⊆ V, A ≠ ∅, V. Let F_A denote the subfamily of paths in F with one endpoint in A and one endpoint in V \ A. Then

|F_A| = Σ_{i∈A, j∈V\A} π_i π_j N = π(A) π(V \ A) N.

On the other hand, every path in F_A contains at least one edge connecting A to V \ A. So if e(A, V \ A) denotes the number of such edges, then

|F_A| ≤ K e(A, V \ A).

Hence for every such A,

e(A, V \ A)/(2m π(A) π(V \ A)) ≥ |F_A|/(2mK π(A) π(V \ A)) = N/(2mK),

and so Φ ≥ N/(2mK).

Corollary 3.13 Let G be a graph with a node- and edge-transitive automorphism group and diameter d ≥ 1. Then Φ > 1/d.

Proof. Let us select a shortest path P_{ij} between every pair of nodes {i, j}. By node-transitivity, every node has π_i = 1/n. Let F consist of these paths and their images under all automorphisms. So |F| = g · n(n−1)/2, where g is the number of automorphisms, and every pair of nodes is connected by g paths; so we can take N = gn^2 (then π_i π_j N = g). The total length of these paths is at most dg · n(n−1)/2, and by edge-transitivity every edge is covered by the same number of paths, which is therefore at most K = dg · n(n−1)/(2m). So we can apply Lemma 3.12 with N = gn^2 and K = dg · n(n−1)/(2m), to get

Φ ≥ N/(2mK) = n^2/(d · n(n−1)) = n/(d(n−1)) > 1/d.

Example 3.14 For the k-cube Q_k, the last corollary gives that Φ > 1/k, and so by Theorem 3.8 the eigenvalue gap is at least 1/(16k^2). We know from Example 3.3 that the eigenvalue gap is in fact 2/k.

Exercise 3.15 Prove that d_var(σ, τ) satisfies the triangle inequality.

Exercise 3.16 Prove that d_var(σ, τ) is convex in both arguments.

Exercise 3.17 Prove that the ε-mixing time for the random walk on an n × n grid is at least cn^2 and at most Cn^2, where c and C are constants depending on ε only.

Exercise 3.18 A coupling of two random walks u^0, u^1, ... and v^0, v^1, ... is called Markovian if the sequence of pairs (u^i, v^i) forms a Markov chain, i.e., given u^i and v^i, the distribution of the pair (u^{i+1}, v^{i+1}) does not depend on the prehistory. (All couplings we constructed above have this property.) Let T be the (random) first time when u^T = v^T, and let s = E(T). Prove that Var(T) ≤ 8s^2.

Exercise 3.19 Let us start a lazy random walk on a graph from node v, and let σ^t denote the distribution after t steps. Prove that σ^t_v is monotone decreasing as a function of t.
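The mixing behavior discussed in this section is easy to observe numerically. The sketch below (the 5-node path and the step count are our choices) iterates the lazy walk of Example 3.6 from an endpoint and tracks d_var(σ^t, π): the distance never increases (compare Exercise 3.19) and decays geometrically.

```python
import numpy as np

# Lazy random walk on the path 0-1-2-3-4, started at the left endpoint.
n = 5
P = np.zeros((n, n))
for i in range(n):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
    for j in nbrs:
        P[i, j] = 1.0 / len(nbrs)
P = 0.5 * (P + np.eye(n))            # lazy version

deg = np.array([1.0, 2.0, 2.0, 2.0, 1.0])
pi = deg / deg.sum()                 # stationary distribution pi_i = d(i)/2m

sigma = np.zeros(n)
sigma[0] = 1.0
dists = []
for t in range(200):
    dists.append(0.5 * np.abs(sigma - pi).sum())  # total variation distance
    sigma = sigma @ P

# d_var(sigma^t, pi) never increases, and is tiny after 200 steps.
assert all(a >= b - 1e-12 for a, b in zip(dists, dists[1:]))
assert dists[-1] < 1e-3
```

Monotonicity here is not special to this example: multiplication by any stochastic matrix contracts total variation distance, so d_var(σ^{t+1}, π) ≤ d_var(σ^t, π) for every chain.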


Lecture 13: Spectral Graph Theory CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 13: Spectral Graph Theory Lecturer: Shayan Oveis Gharan 11/14/18 Disclaimer: These notes have not been subjected to the usual scrutiny reserved

More information

Multi-coloring and Mycielski s construction

Multi-coloring and Mycielski s construction Multi-coloring and Mycielski s construction Tim Meagher Fall 2010 Abstract We consider a number of related results taken from two papers one by W. Lin [1], and the other D. C. Fisher[2]. These articles

More information

Lecture 14: Random Walks, Local Graph Clustering, Linear Programming

Lecture 14: Random Walks, Local Graph Clustering, Linear Programming CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 14: Random Walks, Local Graph Clustering, Linear Programming Lecturer: Shayan Oveis Gharan 3/01/17 Scribe: Laura Vonessen Disclaimer: These

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

Data Mining and Analysis: Fundamental Concepts and Algorithms

Data Mining and Analysis: Fundamental Concepts and Algorithms Data Mining and Analysis: Fundamental Concepts and Algorithms dataminingbook.info Mohammed J. Zaki 1 Wagner Meira Jr. 2 1 Department of Computer Science Rensselaer Polytechnic Institute, Troy, NY, USA

More information

Lecture 3: graph theory

Lecture 3: graph theory CONTENTS 1 BASIC NOTIONS Lecture 3: graph theory Sonia Martínez October 15, 2014 Abstract The notion of graph is at the core of cooperative control. Essentially, it allows us to model the interaction topology

More information

Graph coloring, perfect graphs

Graph coloring, perfect graphs Lecture 5 (05.04.2013) Graph coloring, perfect graphs Scribe: Tomasz Kociumaka Lecturer: Marcin Pilipczuk 1 Introduction to graph coloring Definition 1. Let G be a simple undirected graph and k a positive

More information

Lecture 5: Random Walks and Markov Chain

Lecture 5: Random Walks and Markov Chain Spectral Graph Theory and Applications WS 20/202 Lecture 5: Random Walks and Markov Chain Lecturer: Thomas Sauerwald & He Sun Introduction to Markov Chains Definition 5.. A sequence of random variables

More information

Topics in Graph Theory

Topics in Graph Theory Topics in Graph Theory September 4, 2018 1 Preliminaries A graph is a system G = (V, E) consisting of a set V of vertices and a set E (disjoint from V ) of edges, together with an incidence function End

More information

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION

More information

Algebraic Methods in Combinatorics

Algebraic Methods in Combinatorics Algebraic Methods in Combinatorics Po-Shen Loh 27 June 2008 1 Warm-up 1. (A result of Bourbaki on finite geometries, from Răzvan) Let X be a finite set, and let F be a family of distinct proper subsets

More information

Coupling AMS Short Course

Coupling AMS Short Course Coupling AMS Short Course January 2010 Distance If µ and ν are two probability distributions on a set Ω, then the total variation distance between µ and ν is Example. Let Ω = {0, 1}, and set Then d TV

More information

< k 2n. 2 1 (n 2). + (1 p) s) N (n < 1

< k 2n. 2 1 (n 2). + (1 p) s) N (n < 1 List of Problems jacques@ucsd.edu Those question with a star next to them are considered slightly more challenging. Problems 9, 11, and 19 from the book The probabilistic method, by Alon and Spencer. Question

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

University of Chicago Autumn 2003 CS Markov Chain Monte Carlo Methods

University of Chicago Autumn 2003 CS Markov Chain Monte Carlo Methods University of Chicago Autumn 2003 CS37101-1 Markov Chain Monte Carlo Methods Lecture 4: October 21, 2003 Bounding the mixing time via coupling Eric Vigoda 4.1 Introduction In this lecture we ll use the

More information

Extremal H-colorings of graphs with fixed minimum degree

Extremal H-colorings of graphs with fixed minimum degree Extremal H-colorings of graphs with fixed minimum degree John Engbers July 18, 2014 Abstract For graphs G and H, a homomorphism from G to H, or H-coloring of G, is a map from the vertices of G to the vertices

More information

Review of Random Walks on Graphs: A Survey [1] with Applications to Thermodynamics And a Digression into the Nature of Harmonicity

Review of Random Walks on Graphs: A Survey [1] with Applications to Thermodynamics And a Digression into the Nature of Harmonicity Math 336: Accelerated Advanced Honors Calculus James Morrow Spencer Peters 5/20/16 Review of Random Walks on Graphs: A Survey [1] with Applications to Thermodynamics And a Digression into the Nature of

More information

Discrete Mathematics

Discrete Mathematics Discrete Mathematics Workshop Organized by: ACM Unit, ISI Tutorial-1 Date: 05.07.2017 (Q1) Given seven points in a triangle of unit area, prove that three of them form a triangle of area not exceeding

More information

Modeling and Stability Analysis of a Communication Network System

Modeling and Stability Analysis of a Communication Network System Modeling and Stability Analysis of a Communication Network System Zvi Retchkiman Königsberg Instituto Politecnico Nacional e-mail: mzvi@cic.ipn.mx Abstract In this work, the modeling and stability problem

More information

Introduction to Stochastic Processes

Introduction to Stochastic Processes 18.445 Introduction to Stochastic Processes Lecture 1: Introduction to finite Markov chains Hao Wu MIT 04 February 2015 Hao Wu (MIT) 18.445 04 February 2015 1 / 15 Course description About this course

More information

BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET

BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET WEILIN LI AND ROBERT S. STRICHARTZ Abstract. We study boundary value problems for the Laplacian on a domain Ω consisting of the left half of the Sierpinski

More information

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 1 Outline Outline Dynamical systems. Linear and Non-linear. Convergence. Linear algebra and Lyapunov functions. Markov

More information

Root systems and optimal block designs

Root systems and optimal block designs Root systems and optimal block designs Peter J. Cameron School of Mathematical Sciences Queen Mary, University of London Mile End Road London E1 4NS, UK p.j.cameron@qmul.ac.uk Abstract Motivated by a question

More information

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat

More information

Characterization of cutoff for reversible Markov chains

Characterization of cutoff for reversible Markov chains Characterization of cutoff for reversible Markov chains Yuval Peres Joint work with Riddhi Basu and Jonathan Hermon 23 Feb 2015 Joint work with Riddhi Basu and Jonathan Hermon Characterization of cutoff

More information

Out-colourings of Digraphs

Out-colourings of Digraphs Out-colourings of Digraphs N. Alon J. Bang-Jensen S. Bessy July 13, 2017 Abstract We study vertex colourings of digraphs so that no out-neighbourhood is monochromatic and call such a colouring an out-colouring.

More information

Decomposing oriented graphs into transitive tournaments

Decomposing oriented graphs into transitive tournaments Decomposing oriented graphs into transitive tournaments Raphael Yuster Department of Mathematics University of Haifa Haifa 39105, Israel Abstract For an oriented graph G with n vertices, let f(g) denote

More information

Lecture 8: Path Technology

Lecture 8: Path Technology Counting and Sampling Fall 07 Lecture 8: Path Technology Lecturer: Shayan Oveis Gharan October 0 Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

5 Flows and cuts in digraphs

5 Flows and cuts in digraphs 5 Flows and cuts in digraphs Recall that a digraph or network is a pair G = (V, E) where V is a set and E is a multiset of ordered pairs of elements of V, which we refer to as arcs. Note that two vertices

More information

Connectivity of addable graph classes

Connectivity of addable graph classes Connectivity of addable graph classes Paul Balister Béla Bollobás Stefanie Gerke July 6, 008 A non-empty class A of labelled graphs is weakly addable if for each graph G A and any two distinct components

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Bulletin of the Iranian Mathematical Society

Bulletin of the Iranian Mathematical Society ISSN: 117-6X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 4 (14), No. 6, pp. 1491 154. Title: The locating chromatic number of the join of graphs Author(s): A. Behtoei

More information

ON THE NUMBER OF COMPONENTS OF A GRAPH

ON THE NUMBER OF COMPONENTS OF A GRAPH Volume 5, Number 1, Pages 34 58 ISSN 1715-0868 ON THE NUMBER OF COMPONENTS OF A GRAPH HAMZA SI KADDOUR AND ELIAS TAHHAN BITTAR Abstract. Let G := (V, E be a simple graph; for I V we denote by l(i the number

More information

Strongly chordal and chordal bipartite graphs are sandwich monotone

Strongly chordal and chordal bipartite graphs are sandwich monotone Strongly chordal and chordal bipartite graphs are sandwich monotone Pinar Heggernes Federico Mancini Charis Papadopoulos R. Sritharan Abstract A graph class is sandwich monotone if, for every pair of its

More information

Electrical networks and Markov chains

Electrical networks and Markov chains A.A. Peters Electrical networks and Markov chains Classical results and beyond Masterthesis Date master exam: 04-07-016 Supervisors: Dr. L. Avena & Dr. S. Taati Mathematisch Instituut, Universiteit Leiden

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

Approximate Counting and Markov Chain Monte Carlo

Approximate Counting and Markov Chain Monte Carlo Approximate Counting and Markov Chain Monte Carlo A Randomized Approach Arindam Pal Department of Computer Science and Engineering Indian Institute of Technology Delhi March 18, 2011 April 8, 2011 Arindam

More information

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < + Random Walks: WEEK 2 Recurrence and transience Consider the event {X n = i for some n > 0} by which we mean {X = i}or{x 2 = i,x i}or{x 3 = i,x 2 i,x i},. Definition.. A state i S is recurrent if P(X n

More information

MATH 117 LECTURE NOTES

MATH 117 LECTURE NOTES MATH 117 LECTURE NOTES XIN ZHOU Abstract. This is the set of lecture notes for Math 117 during Fall quarter of 2017 at UC Santa Barbara. The lectures follow closely the textbook [1]. Contents 1. The set

More information

MINIMALLY NON-PFAFFIAN GRAPHS

MINIMALLY NON-PFAFFIAN GRAPHS MINIMALLY NON-PFAFFIAN GRAPHS SERGUEI NORINE AND ROBIN THOMAS Abstract. We consider the question of characterizing Pfaffian graphs. We exhibit an infinite family of non-pfaffian graphs minimal with respect

More information

K 4 -free graphs with no odd holes

K 4 -free graphs with no odd holes K 4 -free graphs with no odd holes Maria Chudnovsky 1 Columbia University, New York NY 10027 Neil Robertson 2 Ohio State University, Columbus, Ohio 43210 Paul Seymour 3 Princeton University, Princeton

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

On the number of cycles in a graph with restricted cycle lengths

On the number of cycles in a graph with restricted cycle lengths On the number of cycles in a graph with restricted cycle lengths Dániel Gerbner, Balázs Keszegh, Cory Palmer, Balázs Patkós arxiv:1610.03476v1 [math.co] 11 Oct 2016 October 12, 2016 Abstract Let L be a

More information

ON THE QUALITY OF SPECTRAL SEPARATORS

ON THE QUALITY OF SPECTRAL SEPARATORS ON THE QUALITY OF SPECTRAL SEPARATORS STEPHEN GUATTERY AND GARY L. MILLER Abstract. Computing graph separators is an important step in many graph algorithms. A popular technique for finding separators

More information

THE MAXIMAL SUBGROUPS AND THE COMPLEXITY OF THE FLOW SEMIGROUP OF FINITE (DI)GRAPHS

THE MAXIMAL SUBGROUPS AND THE COMPLEXITY OF THE FLOW SEMIGROUP OF FINITE (DI)GRAPHS THE MAXIMAL SUBGROUPS AND THE COMPLEXITY OF THE FLOW SEMIGROUP OF FINITE (DI)GRAPHS GÁBOR HORVÁTH, CHRYSTOPHER L. NEHANIV, AND KÁROLY PODOSKI Dedicated to John Rhodes on the occasion of his 80th birthday.

More information

Graph G = (V, E). V ={vertices}, E={edges}. V={a,b,c,d,e,f,g,h,k} E={(a,b),(a,g),( a,h),(a,k),(b,c),(b,k),...,(h,k)}

Graph G = (V, E). V ={vertices}, E={edges}. V={a,b,c,d,e,f,g,h,k} E={(a,b),(a,g),( a,h),(a,k),(b,c),(b,k),...,(h,k)} Graph Theory Graph G = (V, E). V ={vertices}, E={edges}. a b c h k d g f e V={a,b,c,d,e,f,g,h,k} E={(a,b),(a,g),( a,h),(a,k),(b,c),(b,k),...,(h,k)} E =16. Digraph D = (V, A). V ={vertices}, E={edges}.

More information

Coupling. 2/3/2010 and 2/5/2010

Coupling. 2/3/2010 and 2/5/2010 Coupling 2/3/2010 and 2/5/2010 1 Introduction Consider the move to middle shuffle where a card from the top is placed uniformly at random at a position in the deck. It is easy to see that this Markov Chain

More information

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d

More information

Bichain graphs: geometric model and universal graphs

Bichain graphs: geometric model and universal graphs Bichain graphs: geometric model and universal graphs Robert Brignall a,1, Vadim V. Lozin b,, Juraj Stacho b, a Department of Mathematics and Statistics, The Open University, Milton Keynes MK7 6AA, United

More information

Spectral Theory of Unsigned and Signed Graphs Applications to Graph Clustering. Some Slides

Spectral Theory of Unsigned and Signed Graphs Applications to Graph Clustering. Some Slides Spectral Theory of Unsigned and Signed Graphs Applications to Graph Clustering Some Slides Jean Gallier Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104,

More information

Advanced Topics in Discrete Math: Graph Theory Fall 2010

Advanced Topics in Discrete Math: Graph Theory Fall 2010 21-801 Advanced Topics in Discrete Math: Graph Theory Fall 2010 Prof. Andrzej Dudek notes by Brendan Sullivan October 18, 2010 Contents 0 Introduction 1 1 Matchings 1 1.1 Matchings in Bipartite Graphs...................................

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information