Approximate Counting and Markov Chain Monte Carlo

1 Approximate Counting and Markov Chain Monte Carlo: A Randomized Approach
Arindam Pal
Department of Computer Science and Engineering, Indian Institute of Technology Delhi
April 8, 2011

2 Agenda
Introduction to the Monte Carlo Method
Approximating the Value of π
The Complexity Classes #P and #P-complete
Fully Polynomial Randomized Approximation Scheme
DNF Counting Problem
Introduction to Markov Chains
Random Walks on Graphs
Markov Chain Monte Carlo Method
Counting the Number of Knapsack Solutions
Rapidly Mixing Markov Chains
Mixing Time, Canonical Paths and Conductance

3 The Monte Carlo Method
Uses random sampling to estimate the value of a quantity.
Primarily used in numerical analysis and simulation, and widely used in the physical sciences, engineering, computational biology and statistics, among many other fields.
A classic example is estimating the value of π. Other examples include evaluating a definite integral (the area under a curve) and solving optimization problems using random walks.
Simulated annealing and the Metropolis algorithm are good heuristics for approximating the global optimum of a given function in a large search space.

4 Algorithms for Approximate Counting Problems
DNF counting problem (Karp and Luby).
Network reliability (Karger).
Counting the number of knapsack solutions (Dyer, Frieze, Kannan, Kapoor, Perkovic and Vazirani).
Approximating the permanent (Jerrum and Sinclair).
Estimating the volume of a convex body (Kannan, Lovász and Simonovits).

5 Approximating the Value of π
Let $S = \{(x, y) \in \mathbb{R}^2 : |x| \le 1, |y| \le 1\}$ be the $2 \times 2$ square centered at $(0, 0)$.
Let $C = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 \le 1\}$ be the unit circle centered at $(0, 0)$.
Choose a point $(x, y) \in S$ uniformly at random. This is equivalent to choosing $x$ and $y$ independently from a uniform distribution on the interval $[-1, 1]$.
Define a random variable
$$Z_i = \begin{cases} 1 & \text{if } (x, y) \in C \text{ in the } i\text{th iteration}, \\ 0 & \text{otherwise.} \end{cases}$$
Clearly, $E[Z_i] = \Pr[Z_i = 1] = \frac{\mathrm{area}(C)}{\mathrm{area}(S)} = \frac{\pi}{4}$.

6 Analysis
Suppose we repeat this experiment $m$ times independently. Let $Z = \sum_{i=1}^m Z_i$. Then $E[Z] = \sum_{i=1}^m E[Z_i] = \frac{m\pi}{4}$.
Let $X = \frac{4}{m} Z$. Then $E[X] = \frac{4}{m} E[Z] = \frac{4}{m} \cdot \frac{m\pi}{4} = \pi$.
Applying the Chernoff bound,
$$\Pr[|X - \pi| \ge \varepsilon\pi] = \Pr\left[\left|Z - \frac{m\pi}{4}\right| \ge \frac{\varepsilon m\pi}{4}\right] = \Pr[|Z - E[Z]| \ge \varepsilon E[Z]] \le 2e^{-m\pi\varepsilon^2/12}.$$
Since the error probability decreases exponentially with the number of trials $m$, we can get a very good approximation to π with high probability.
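
A minimal Python sketch of this estimator (the sample size m = 10^6 is an arbitrary illustrative choice; the Chernoff bound above says m = O(ε^{-2} log(1/δ)) samples suffice):

    import random

    def estimate_pi(m):
        # Count samples from the square [-1, 1]^2 that land in the unit circle.
        z = 0
        for _ in range(m):
            x, y = random.uniform(-1, 1), random.uniform(-1, 1)
            if x * x + y * y <= 1:
                z += 1
        # E[(4/m) * Z] = pi, by the analysis above.
        return 4 * z / m

    print(estimate_pi(1_000_000))   # typically within ~0.01 of pi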

7 The Complexity Classes #P and #P-complete
Definition. A problem $\Pi \in \#P$ if there is a nondeterministic polynomial-time Turing machine that, for any instance $I$ of $\Pi$, has a number of accepting computations exactly equal to the number of distinct solutions to $I$.
$\Pi$ is #P-complete if any problem $\Gamma \in \#P$ can be reduced to $\Pi$ by a polynomial-time Turing reduction relating the cardinalities of the solution sets.

8 Importance of #P-completeness
The class #P consists of all counting problems associated with the decision problems in NP.
There are easy problems in #P, like counting the number of spanning trees of a graph, which can be solved in polynomial time using Kirchhoff's Matrix Tree Theorem.
If a #P-complete problem can be solved in polynomial time, then P = NP.
There is no known efficient deterministic approximation algorithm for any #P-complete problem, but there are efficient randomized approximation algorithms for many #P-complete problems.

9 Kirchhoff's Matrix Tree Theorem
Theorem. The number of spanning trees $\tau(G)$ of a graph $G$ on $n$ vertices is the absolute value of the determinant of any $(n-1) \times (n-1)$ submatrix (cofactor) of the Laplacian matrix $L(G)$. Equivalently, if $\lambda_1, \ldots, \lambda_{n-1}$ are the non-zero eigenvalues of $L(G)$, then $\tau(G) = \frac{1}{n} \prod_{i=1}^{n-1} \lambda_i$.
Here $L(G) = D(G) - A(G)$, where $D(G)$ and $A(G)$ are the degree matrix and the adjacency matrix of $G$, respectively.
Using this, one can prove Cayley's formula: $\tau(K_n) = n^{n-2}$.
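
As a quick check of the theorem, here is a NumPy sketch that counts spanning trees by taking a cofactor of the Laplacian; it verifies Cayley's formula for $K_4$, where $\tau(K_4) = 4^2 = 16$:

    import numpy as np

    def spanning_trees(adj):
        # Laplacian L = D - A; any (n-1) x (n-1) cofactor has determinant tau(G).
        A = np.array(adj, dtype=float)
        L = np.diag(A.sum(axis=1)) - A
        minor = L[1:, 1:]                 # delete row 0 and column 0
        return round(abs(np.linalg.det(minor)))

    K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
    print(spanning_trees(K4))             # 16 = 4^(4-2), matching Cayley's formula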

10 Fully Polynomial Randomized Approximation Scheme (FPRAS)
Definition. An (ε, δ)-Fully Polynomial Randomized Approximation Scheme (FPRAS) for a problem is a randomized algorithm $A$ which, given an input $x$ and two parameters $\varepsilon, \delta$ ($0 < \varepsilon, \delta < 1$), outputs a value $A(x)$ approximating the actual value $V(x)$ such that
$$\Pr[|A(x) - V(x)| \le \varepsilon V(x)] \ge 1 - \delta,$$
in time polynomial in $|x|$, $1/\varepsilon$ and $\log(1/\delta)$.

11 DNF Counting Problem
$F$ is a boolean formula in disjunctive normal form (DNF) over $n$ boolean variables $x_1, \ldots, x_n$.
$F = C_1 \vee \cdots \vee C_t$ is a disjunction of clauses $C_i$. Each clause $C_i = l_1 \wedge \cdots \wedge l_{r_i}$ is a conjunction of $r_i$ literals, and each literal $l_j$ is either a variable $x_k$ or its complement $\bar{x}_k$.
$\#F$ is the number of distinct satisfying assignments of $F$. Our goal is to compute $\#F$. Note that $0 \le \#F \le 2^n$.
Computing $\#F$ is #P-complete.
We want to design an (ε, δ)-FPRAS with running time polynomial in $n$, $t$, $1/\varepsilon$ and $\log(1/\delta)$.

12 A Naive Random Sampling Algorithm
1  X ← 0.
2  for k = 1, ..., m
3      for i = 1, ..., n
4          set x_i ← 1 with probability 1/2.
5      end for
6      if this random assignment satisfies F
7          X ← X + 1.
8  end for
9  return Y ← (X/m) · 2^n.
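
A direct Python transcription of this naive sampler (the encoding of clauses as variable-to-value maps is an assumption made for illustration):

    import random

    def naive_dnf_count(clauses, n, m):
        # clauses: list of dicts mapping a variable index to its required value (0/1).
        hits = 0
        for _ in range(m):
            a = [random.randint(0, 1) for _ in range(n)]      # uniform assignment
            if any(all(a[v] == val for v, val in c.items()) for c in clauses):
                hits += 1
        return (hits / m) * 2 ** n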

13 Analysis of the Naive Algorithm
Define a random variable
$$X_k = \begin{cases} 1 & \text{if } F \text{ is satisfied in the } k\text{th iteration}, \\ 0 & \text{otherwise.} \end{cases}$$
The $X_k$ are independent 0-1 random variables, and $X = \sum_{k=1}^m X_k$, with $E[X_k] = \Pr[X_k = 1] = \frac{\#F}{2^n}$. Hence
$$E[Y] = \frac{2^n}{m} E[X] = \frac{2^n}{m} \sum_{k=1}^m E[X_k] = \frac{2^n}{m} \cdot m \cdot \frac{\#F}{2^n} = \#F.$$

14 Problem with the Naive Algorithm
$X_k$ is a Bernoulli random variable with parameter $p = \frac{\#F}{2^n}$, so $X$ has a binomial distribution with parameters $m$ and $p$. By the Chernoff bound,
$$\Pr[(1-\varepsilon)\#F \le Y \le (1+\varepsilon)\#F] = \Pr\left[(1-\varepsilon)p2^n \le \frac{X}{m} 2^n \le (1+\varepsilon)p2^n\right] = \Pr[(1-\varepsilon)mp \le X \le (1+\varepsilon)mp] \ge 1 - 2e^{-mp\varepsilon^2/4}.$$
For this to be at least $1 - \delta$, we must have
$$m \ge \frac{4}{p\varepsilon^2} \ln\frac{2}{\delta} = \frac{4 \cdot 2^n}{\#F\,\varepsilon^2} \ln\frac{2}{\delta}.$$
If $\#F$ is sub-exponential (e.g., polynomial) in $n$, this is an exponential-time algorithm and hence is not an FPRAS.

15 Towards an FPRAS for DNF Counting
$F = C_1 \vee \cdots \vee C_t$. If $|C_i| = r_i$, there are $2^{n - r_i}$ satisfying assignments for $C_i$.
Let $SC_i$ be the set of assignments satisfying $C_i$, and let $U = \{(i, a) : 1 \le i \le t \text{ and } a \in SC_i\}$. Then $|U| = \sum_{i=1}^t |SC_i|$ can be computed efficiently.
We want to estimate $\#F = |\bigcup_{i=1}^t SC_i|$.
Note that $\#F \le |U|$, since an assignment can satisfy more than one clause and so can appear in more than one pair in $U$.

16 Sampling More Carefully
The idea is to define a set $S \subseteq U$ with $|S| = \#F$:
$$S = \{(i, a) : 1 \le i \le t \text{ and } a \in SC_i, \text{ but } a \notin SC_j \text{ for } j < i\}.$$
We can estimate $|S|$ by estimating the ratio $|S|/|U|$, since we know $|U|$: sample uniformly at random from $U$ and count how many of the samples lie in $S$.
$S$ is relatively dense in $U$: since each assignment can satisfy at most $t$ different clauses, $|S|/|U| \ge 1/t$.

17 How to Sample Uniformly at Random from U?
First choose $i$ with probability $\frac{|SC_i|}{|U|}$. Then choose an assignment $a \in SC_i$ uniformly at random; this can be done by fixing the literals of $C_i$ and setting each variable not appearing in $C_i$ to 1 with probability 1/2, independently. Then
$$\Pr[(i, a) \text{ is chosen}] = \Pr[i \text{ is chosen}] \cdot \Pr[a \text{ is chosen} \mid i \text{ is chosen}] = \frac{|SC_i|}{|U|} \cdot \frac{1}{|SC_i|} = \frac{1}{|U|}.$$
So this method samples uniformly at random from $U$.

18 An FPRAS for DNF Counting
1  X ← 0.
2  for k = 1, ..., m
3      choose i with probability |SC_i| / |U|, and choose an assignment a ∈ SC_i uniformly at random.
4      if a ∉ SC_j for all j < i
5          X ← X + 1.
6  end for
7  return Y ← (X/m) · |U|.
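
A Python sketch of this FPRAS (the clause encoding as variable-to-value maps and the default ε, δ are illustrative assumptions; the sample size m follows the bound stated in the analysis on the next slide):

    import math, random

    def fpras_dnf_count(clauses, n, eps=0.1, delta=0.05):
        t = len(clauses)
        sizes = [2 ** (n - len(c)) for c in clauses]    # |SC_i| = 2^(n - r_i)
        u = sum(sizes)                                  # |U|
        m = math.ceil((4 * t / eps ** 2) * math.log(2 / delta))
        x = 0
        for _ in range(m):
            # Step 1: pick clause i with probability |SC_i| / |U|.
            i = random.choices(range(t), weights=sizes)[0]
            # Step 2: sample a uniformly from SC_i -- fix C_i's literals,
            # flip fair coins for the remaining variables.
            a = [random.randint(0, 1) for _ in range(n)]
            for var, val in clauses[i].items():
                a[var] = val
            # Count (i, a) only if no earlier clause is also satisfied by a.
            if not any(all(a[v] == val for v, val in clauses[j].items())
                       for j in range(i)):
                x += 1
        return (x / m) * u

    # F = (x0 AND x1) OR (NOT x0 AND x2) over n = 3 variables; #F = 4.
    f = [{0: 1, 1: 1}, {0: 0, 2: 1}]
    print(fpras_dnf_count(f, 3))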

19 Analysis of the FPRAS
A similar analysis shows that $m \ge \frac{4t}{\varepsilon^2} \ln\frac{2}{\delta}$ samples suffice, since now $|S|/|U| \ge 1/t$. Since the running time is polynomial in $n$, $t$, $1/\varepsilon$ and $\log(1/\delta)$, this is an (ε, δ)-FPRAS.

20 Introduction to Markov Chains
Definition. A finite Markov chain $M$ is a discrete-time stochastic process defined over a finite set of $n$ states $\Omega$ and an $n \times n$ matrix $P$ of transition probabilities. If $X_t$ is the state of $M$ at time $t$, then the memorylessness property states that
$$\Pr[X_{t+1} = j \mid X_0 = i_0, \ldots, X_{t-1} = i_{t-1}, X_t = i] = \Pr[X_{t+1} = j \mid X_t = i] = P_{ij}.$$
$M$ makes state transitions at discrete time steps $t = 1, 2, \ldots$. $P$ has one row and one column for each state in $\Omega$; the entry $P_{ij}$ is the probability that the next state will be $j$, given that the current state is $i$. For all $i, j \in \Omega$, $0 \le P_{ij} \le 1$ and $\sum_{j=1}^n P_{ij} = 1$.

21 Classification of States
For $n \ge 0$, the $n$-step transition probability is defined as $P^n_{ij} = \Pr[X_{t+n} = j \mid X_t = i]$.
State $j$ is accessible from state $i$ ($i \to j$) if $P^n_{ij} > 0$ for some $n \ge 0$. If $i \to j$ and $j \to i$, then we say that $i$ and $j$ communicate ($i \leftrightarrow j$). This is an equivalence relation.
A Markov chain is irreducible if all states belong to one communicating class. This happens if and only if its graph representation is strongly connected.
Let $r^t_{ij}$ be the probability that, starting at state $i$, the first transition to state $j$ occurs at time $t$. More precisely,
$$r^t_{ij} = \Pr[X_t = j,\ X_s \ne j \text{ for } 1 \le s \le t-1 \mid X_0 = i].$$

22 Classification of States (continued)
A state $i$ is recurrent (persistent) if $f_{ii} = \sum_{t=1}^\infty r^t_{ii} = 1$; otherwise it is transient. A Markov chain is recurrent if every state is recurrent.
The expected time to first reach $j$ after starting at $i$ is $h_{ij} = \sum_{t=1}^\infty t \cdot r^t_{ij}$. This is called the hitting time.
A recurrent state $i$ is positive recurrent (non-null persistent) if $h_{ii} < \infty$; otherwise it is null recurrent (null persistent). Note that $f_{ii} = 1$ does not imply $h_{ii} < \infty$.
A state $i$ is called periodic if there exists an integer $T > 1$ such that $P^n_{ii} = \Pr[X_{t+n} = i \mid X_t = i] = 0$ unless $T$ divides $n$; $T$ is called the period of $i$. Otherwise, $i$ is called aperiodic. A Markov chain is called aperiodic if all its states are aperiodic.

23 Ergodic Markov Chains and Stationary Distributions
An aperiodic and positive recurrent state is called an ergodic state. A Markov chain is called ergodic if all its states are ergodic.
A probability distribution $\pi$ is called a stationary distribution for $M$ with transition matrix $P$ if $\pi P = \pi$.
Fundamental Theorem of Markov Chains. Any finite, irreducible and aperiodic Markov chain is ergodic and has a unique stationary distribution $\pi$.
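
For intuition, a small NumPy sketch that finds the stationary distribution of an ergodic chain by repeatedly applying πP = π (the two-state transition matrix below is an arbitrary example, not from the slides):

    import numpy as np

    P = np.array([[0.9, 0.1],       # an arbitrary ergodic two-state chain
                  [0.5, 0.5]])
    pi = np.array([1.0, 0.0])       # any starting distribution works
    for _ in range(1000):           # power iteration converges to the unique pi
        pi = pi @ P
    print(pi)                       # approx [0.8333, 0.1667]; check: pi @ P == pi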

24 Random Walks on Graphs
Let $G = (V, E)$ be a connected, non-bipartite, undirected graph with $|V| = n$ and $|E| = m$. This induces a Markov chain $M_G$: the states of $M_G$ are the vertices of $G$, and for any two vertices $u, v \in V$,
$$P_{uv} = \begin{cases} \frac{1}{d(u)} & \text{if } (u, v) \in E, \\ 0 & \text{otherwise.} \end{cases}$$
$M_G$ is ergodic with stationary distribution $\pi$ given by $\pi_v = \frac{d(v)}{2m}$.
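
A quick empirical check of $\pi_v = d(v)/2m$ (the 4-vertex graph below, a triangle with a pendant edge, is a hypothetical example; being non-bipartite and connected, its walk is ergodic):

    import random
    from collections import Counter

    # A connected, non-bipartite graph: triangle 0-1-2 plus pendant edge 2-3.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    m = sum(len(nbrs) for nbrs in adj.values()) // 2     # number of edges = 4

    visits, u, steps = Counter(), 0, 200_000
    for _ in range(steps):
        u = random.choice(adj[u])                        # move to a uniform neighbor
        visits[u] += 1

    for v in sorted(adj):
        # empirical visit frequency vs. the predicted d(v) / 2m
        print(v, visits[v] / steps, len(adj[v]) / (2 * m))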

25 Counting the Number of Knapsack Solutions
Let $a = (a_0, \ldots, a_{n-1}) \in \mathbb{N}^n$ be an $n$-dimensional integer vector and let $b \in \mathbb{N}$ be any integer. Given the inequality $a \cdot x = \sum_{i=0}^{n-1} a_i x_i \le b$, where $x \in \{0, 1\}^n$, compute the number $N$ of such vectors $x$.
Suppose $a_0, \ldots, a_{n-1}$ are the sizes of $n$ items that can be packed into a knapsack of size $b$. Then $N$ is the number of combinations of items that fit into the knapsack, so we have to count the number of knapsack solutions. This problem is #P-complete.
Assume that $\sum_{i=0}^{n-1} a_i > b$; otherwise $\sum_{i=0}^{n-1} a_i x_i \le \sum_{i=0}^{n-1} a_i \le b$ for every $x$, and the number of solutions is $2^n$.

26 A Naive Random Sampling Algorithm
1  X ← 0.
2  for k = 1, ..., m
3      for i = 0, ..., n − 1
4          set x_i ← 1 with probability 1/2.
5      end for
6      if this random assignment satisfies a · x ≤ b
7          X ← X + 1.
8  end for
9  return Y ← (X/m) · 2^n.

27 Problem with this Approach
Take $a = (1, \ldots, 1)$ and $b = n/3$. The expected number of trials before the event $a \cdot x \le b$ occurs for the first time is exponential in $n$.
A sequence of trials of reasonable length will typically yield a mean close to 0, even though the actual number of knapsack solutions may be exponentially large. The variance of the estimator is too large for it to be of any practical value.

28 Analysis of the Naive Algorithm
Define a random variable
$$X_k = \begin{cases} 1 & \text{if } a \cdot x \le b \text{ in the } k\text{th iteration}, \\ 0 & \text{otherwise.} \end{cases}$$
The $X_k$ are independent 0-1 random variables, and $X = \sum_{k=1}^m X_k$, with $E[X_k] = \Pr[X_k = 1] = \frac{N}{2^n}$. Hence
$$E[Y] = \frac{2^n}{m} E[X] = \frac{2^n}{m} \sum_{k=1}^m E[X_k] = \frac{2^n}{m} \cdot m \cdot \frac{N}{2^n} = N.$$

29 Problem with the Naive Algorithm
$X_k$ is a Bernoulli random variable with parameter $p = \frac{N}{2^n}$, so $X$ has a binomial distribution with parameters $m$ and $p$. By the Chernoff bound,
$$\Pr[(1-\varepsilon)N \le Y \le (1+\varepsilon)N] = \Pr\left[(1-\varepsilon)p2^n \le \frac{X}{m} 2^n \le (1+\varepsilon)p2^n\right] = \Pr[(1-\varepsilon)mp \le X \le (1+\varepsilon)mp] \ge 1 - 2e^{-mp\varepsilon^2/4}.$$
For this to be at least $1 - \delta$, we must have
$$m \ge \frac{4}{p\varepsilon^2} \ln\frac{2}{\delta} = \frac{4 \cdot 2^n}{N\varepsilon^2} \ln\frac{2}{\delta}.$$
If $N$ is sub-exponential (e.g., polynomial) in $n$, this is an exponential-time algorithm and hence is not an FPRAS.

30 A Markov Chain Monte Carlo Algorithm
Consider the Markov chain $M_{\text{Knapsack}}$ with state space $\Omega = \{x \in \{0, 1\}^n : a \cdot x \le b\}$, where the transition from a state $x = (x_0, \ldots, x_{n-1}) \in \Omega$ to another state $y \in \Omega$ is defined by the following rules:
State transition rules for $M_{\text{Knapsack}}$
1  with probability 1/2, let y = x; otherwise:
2  select i uniformly at random from the range 0 ≤ i ≤ n − 1.
3  let y′ = (x_0, ..., x_{i−1}, 1 − x_i, x_{i+1}, ..., x_{n−1}).
4  if a · y′ ≤ b
5      y = y′.
6  else
7      y = x.

31 Properties of M_Knapsack
$M_{\text{Knapsack}}$ may be interpreted as a random walk (with stationary moves) on the $n$-dimensional boolean hypercube with vertex set $\{0, 1\}^n$, truncated by the hyperplane $a \cdot x \le b$.
It is ergodic, since all pairs of states communicate via the state $(0, \ldots, 0)$, and the presence of self-loops ensures aperiodicity. It can easily be checked that the stationary distribution is uniform over $\Omega$.
Starting in state $(0, \ldots, 0)$, simulate $M_{\text{Knapsack}}$ for sufficiently many steps until the distribution over states is close to the uniform distribution, and return the current state as the result. This gives a procedure for sampling knapsack solutions almost uniformly at random.
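
The transition rule translates directly into Python. This sketch simulates $M_{\text{Knapsack}}$ from $(0, \ldots, 0)$ for a fixed number of steps; the instance and the step count are arbitrary placeholder choices, since the number of steps actually required is exactly the mixing-time question discussed below:

    import random

    def knapsack_chain_sample(a, b, steps):
        n = len(a)
        x = [0] * n                       # start at the all-zeros solution
        for _ in range(steps):
            if random.random() < 0.5:     # lazy self-loop ensures aperiodicity
                continue
            i = random.randrange(n)       # propose flipping coordinate i
            x[i] ^= 1
            if sum(ai * xi for ai, xi in zip(a, x)) > b:
                x[i] ^= 1                 # reject: flip back, stay at x
        return x

    print(knapsack_chain_sample([3, 5, 2, 7], 9, steps=10_000))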

32 Product Estimators
Let $V$ be the set of elements we want to count. The size of $V$ is typically exponentially large in the natural size $k$ of the problem. Suppose we can find a chain of subsets $V_0 \subseteq V_1 \subseteq \cdots \subseteq V_m = V$ such that:
$|V_0|$ is known (usually $|V_0| = 1$).
$|V_{i+1}|/|V_i|$ is polynomially bounded in $k$.
$m$ is polynomially bounded.
There is a polynomial-time oracle to generate a random element uniformly distributed over $V_i$, for each $i$, $1 \le i \le m$.
Then we can estimate each ratio $|V_i|/|V_{i+1}|$ by generating a polynomial number of elements of $V_{i+1}$ and counting how often we hit $V_i$. The product of the inverses of these estimates with $|V_0|$ gives an estimate for $|V|$. This scheme typically results in an FPRAS.

33 From Sampling to Estimation of |Ω|
We keep the vector $a$ fixed but allow the bound $b$ to vary. Let $M_{\text{Knapsack}}(b)$ and $\Omega(b)$ be the Markov chain and its state space as functions of $b$.
Assume without loss of generality that $a_0 \le a_1 \le \cdots \le a_{n-1}$. Define $b_0 = 0$ and $b_i = \min\{b, \sum_{j=0}^{i-1} a_j\}$ for $1 \le i \le n$. Note that $b_n = b$, since $\sum_{j=0}^{n-1} a_j > b$.
It is easy to see that $\Omega(b_{i-1}) \subseteq \Omega(b_i)$ and $|\Omega(b_i)| \le (n+1)|\Omega(b_{i-1})|$ for $1 \le i \le n$: any element of $\Omega(b_i)$ can be converted to an element of $\Omega(b_{i-1})$ by changing the rightmost 1 to a 0, and conversely, from any element of $\Omega(b_{i-1})$ we can obtain at most $n + 1$ elements of $\Omega(b_i)$.

34 Estimating the Value of |Ω|
$$|\Omega(b)| = |\Omega(b_n)| = |\Omega(b_0)| \cdot \prod_{i=1}^n \frac{|\Omega(b_i)|}{|\Omega(b_{i-1})|} = \prod_{i=1}^n \rho_i^{-1},$$
where $\rho_i = \frac{|\Omega(b_{i-1})|}{|\Omega(b_i)|} \ge \frac{1}{n+1}$. Note that $|\Omega(b_0)| = 1$.
Each $\rho_i$ can be estimated by sampling almost uniformly from $\Omega(b_i)$ using the Markov chain $M_{\text{Knapsack}}(b_i)$, and computing the fraction of the samples that lie in $\Omega(b_{i-1})$.
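
Putting the pieces together, a self-contained Python sketch of this product estimator (the chain length and trial count below are arbitrary placeholders; the rigorous per-ratio choice $t = 17n^2\varepsilon^{-2}$ appears in the analysis that follows):

    import random

    def chain_sample(a, b, steps):
        # One run of M_Knapsack(b) from the all-zeros state (rules of slide 30).
        n, x = len(a), [0] * len(a)
        for _ in range(steps):
            if random.random() < 0.5:
                continue
            i = random.randrange(n)
            x[i] ^= 1
            if sum(ai * xi for ai, xi in zip(a, x)) > b:
                x[i] ^= 1
        return x

    def count_knapsack(a, b, steps=2000, trials=500):
        a = sorted(a)                     # a_0 <= a_1 <= ... <= a_{n-1}
        n = len(a)
        bs = [0] + [min(b, sum(a[:i])) for i in range(1, n + 1)]   # b_0, ..., b_n
        est = 1.0                                                  # |Omega(b_0)| = 1
        for i in range(1, n + 1):
            # Estimate rho_i = |Omega(b_{i-1})| / |Omega(b_i)| by sampling.
            hits = sum(
                sum(ai * xi for ai, xi in zip(a, chain_sample(a, bs[i], steps))) <= bs[i - 1]
                for _ in range(trials)
            )
            est *= trials / max(hits, 1)   # multiply by an estimate of 1 / rho_i
        return est

    print(count_knapsack([3, 5, 2, 7], 9))   # exact count is 9 for this instance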

35 Analysis
Consider the random variable $X_i$ associated with a single run of the Markov chain $M_{\text{Knapsack}}(b_i)$:
$$X_i = \begin{cases} 1 & \text{if the final state is a member of } \Omega(b_{i-1}), \\ 0 & \text{otherwise.} \end{cases}$$
If we were able to simulate $M_{\text{Knapsack}}(b_i)$ forever, we would have $E[X_i] = \rho_i$. However, we must terminate the simulation at some point, thereby introducing a small bias. We will ignore this and assume that $E[X_i] = \rho_i$ and $\mathrm{Var}[X_i] = \rho_i(1 - \rho_i)$.

36 Analysis (continued)
Suppose we perform $t = 17n^2\varepsilon^{-2}$ trials, and let $\overline{X}_i$ be the sample mean. Then
$$\frac{\mathrm{Var}[\overline{X}_i]}{E[\overline{X}_i]^2} = \frac{\rho_i(1-\rho_i)}{t\rho_i^2} = \frac{1-\rho_i}{t\rho_i} \le \frac{n}{t} = \frac{\varepsilon^2}{17n},$$
since $\rho_i \ge \frac{1}{n+1}$. This process is repeated for each $\rho_i$.

37 Analysis (continued)
Let $Z = \prod_{i=1}^n \overline{X}_i$. The $\overline{X}_i$ are independent, so $E[Z] = \prod_{i=1}^n E[\overline{X}_i] = \prod_{i=1}^n \rho_i = \frac{1}{|\Omega(b)|}$, and
$$\frac{\mathrm{Var}[Z]}{E[Z]^2} = \frac{E[Z^2]}{E[Z]^2} - 1 = \frac{\prod_{i=1}^n E[\overline{X}_i^2]}{\prod_{i=1}^n E[\overline{X}_i]^2} - 1 = \prod_{i=1}^n \frac{E[\overline{X}_i]^2 + \mathrm{Var}[\overline{X}_i]}{E[\overline{X}_i]^2} - 1 = \prod_{i=1}^n \left(1 + \frac{\mathrm{Var}[\overline{X}_i]}{E[\overline{X}_i]^2}\right) - 1.$$

38 Analysis (continued)
$$\frac{\mathrm{Var}[Z]}{E[Z]^2} \le \left(1 + \frac{\varepsilon^2}{17n}\right)^n - 1 \le \frac{\varepsilon^2}{16}.$$
By Chebyshev's inequality, we conclude that
$$\Pr\left[\left(1 - \frac{\varepsilon}{2}\right)\frac{1}{|\Omega(b)|} \le Z \le \left(1 + \frac{\varepsilon}{2}\right)\frac{1}{|\Omega(b)|}\right] \ge \frac{3}{4}.$$
The number of trials (Markov chain simulations) used is $nt = 17n^3\varepsilon^{-2}$, which is polynomial in $n$ and $1/\varepsilon$. This gives an FPRAS for the number of knapsack solutions, provided that $M_{\text{Knapsack}}$ is rapidly mixing.

39 Rapidly Mixing Markov Chains
A Markov chain is rapidly mixing if it converges to its stationary distribution after a number of steps polynomial in $n$. This is a non-trivial condition for $M_{\text{Knapsack}}$, since the size of the state space $\Omega$ is exponential in $n$.
It is not known whether $M_{\text{Knapsack}}$ is rapidly mixing, and whether there exists an FPRAS of any kind for the knapsack counting problem is still unresolved, although there have been some recent advances in this area.
Three techniques for bounding the mixing time of a Markov chain are coupling, canonical paths and conductance.

40 Variation Distance and Mixing Time
Let $M$ be an ergodic Markov chain on state space $\Omega$ with transition probabilities $P : \Omega^2 \to [0, 1]$. Denote by $P^t(x, S)$ the probability that the chain is in the set $S \subseteq \Omega$ at time $t$, given that $x \in \Omega$ is the initial state, and let $\pi$ be the stationary distribution of $M$.
The variation distance at time $t$ with respect to $x$ is defined as
$$\Delta_x(t) = \max_{S \subseteq \Omega} |P^t(x, S) - \pi(S)| = \frac{1}{2} \sum_{y \in \Omega} |P^t(x, y) - \pi(y)|.$$
The mixing time of the Markov chain $M$ is given by
$$\tau_x(\varepsilon) = \min\{t : \Delta_x(t') \le \varepsilon \text{ for all } t' \ge t\}.$$
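
These definitions are easy to compute exactly for a small chain; a NumPy sketch (the two-state chain is an arbitrary example, reused from the earlier sketch):

    import numpy as np

    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    pi = np.array([5 / 6, 1 / 6])           # stationary distribution of P

    def variation_distance(x, t):
        # Delta_x(t) = (1/2) * sum_y |P^t(x, y) - pi(y)|
        Pt = np.linalg.matrix_power(P, t)
        return 0.5 * np.abs(Pt[x] - pi).sum()

    def mixing_time(x, eps):
        # Delta_x(t) decays geometrically for this chain, so the first t
        # below eps is the mixing time tau_x(eps).
        t = 0
        while variation_distance(x, t) > eps:
            t += 1
        return t

    print(mixing_time(0, 0.01))              # 4 steps for this tiny chain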

41 Time-Reversible Markov Chains
A Markov chain $M$ is called time-reversible if:
The self-loop probabilities satisfy $P(x, x) \ge \frac{1}{2}$ for all $x \in \Omega$.
It satisfies the following detailed balance condition: $Q(x, y) = \pi(x)P(x, y) = \pi(y)P(y, x)$ for all $x, y \in \Omega$.
The detailed balance condition is stronger than what is required merely for a stationary distribution: there are Markov chains with stationary distributions that do not satisfy detailed balance. Detailed balance implies that, around any closed cycle of states, there is no net flow of probability.

42 Canonical Paths
We think of $M$ as an undirected graph $G = (\Omega, E)$, where $E = \{(x, y) \in \Omega^2 : Q(x, y) > 0\}$.
For each pair $(x, y) \in \Omega^2$, we specify a canonical path $p_{xy}$ from $x$ to $y$ in the graph $G$; the canonical path $p_{xy}$ corresponds to a sequence of legal transitions in $M$ from the initial state $x$ to the final state $y$. Let $P = \{p_{xy} : x, y \in \Omega\}$ be the set of all canonical paths.
The maximum load (congestion) on any edge under $P$ is defined as
$$\rho(P) = \max_{e \in E} \frac{1}{Q(e)} \sum_{p_{xy} \ni e} \pi(x)\pi(y)\,|p_{xy}|.$$
Intuitively, we expect a Markov chain to be rapidly mixing if it contains no bottlenecks, i.e., if it admits a choice of paths $P$ for which $\rho(P)$ is not too large.

43 Relationship between Mixing Time and Congestion
Theorem. Let $M$ be a finite, reversible, ergodic Markov chain with self-loop probabilities $P(x, x) \ge \frac{1}{2}$ for all states $x$. Let $P$ be a set of canonical paths with maximum edge congestion $\rho$. Then, for any choice of the initial state $x$, the mixing time of $M$ satisfies
$$\tau_x(\varepsilon) \le \rho\left(\ln\frac{1}{\pi(x)} + \ln\frac{1}{\varepsilon}\right).$$
Good upper bounds on the congestion $\rho$ translate to good upper bounds on the mixing time $\tau_x(\varepsilon)$.

44 Conductance
The conductance $\Phi$ of a Markov chain $M$ is defined as
$$\Phi(M) = \min_{\substack{S \subseteq \Omega \\ 0 < \pi(S) \le 1/2}} \frac{Q(S, \bar{S})}{\pi(S)}, \quad \text{where } Q(S, \bar{S}) = \sum_{\substack{x \in S,\ y \in \bar{S} \\ (x, y) \in E}} Q(x, y).$$
The conductance $\Phi$ may be viewed as a weighted version of the edge expansion of the graph $G = (\Omega, E)$ associated with $M$. Alternatively, the ratio $Q(S, \bar{S})/\pi(S)$ can be interpreted as the conditional probability that the chain in equilibrium escapes from the subset $S$ of the state space $\Omega$ in one step, given that it is initially in $S$.
$\Phi$ measures the readiness of the chain to escape from any small enough region of $\Omega$ and make rapid progress towards equilibrium.
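
For chains small enough to enumerate, $\Phi$ can be computed directly from the definition by brute force over all subsets; a sketch (the three-state lazy walk on a path is an arbitrary reversible example):

    import itertools
    import numpy as np

    # An arbitrary lazy random walk on the path 0-1-2 (reversible, ergodic).
    P = np.array([[0.5, 0.5, 0.0],
                  [0.25, 0.5, 0.25],
                  [0.0, 0.5, 0.5]])
    pi = np.array([0.25, 0.5, 0.25])         # stationary: pi(v) proportional to d(v)

    def conductance(P, pi):
        n = len(pi)
        best = float("inf")
        for r in range(1, n):
            for S in itertools.combinations(range(n), r):
                if pi[list(S)].sum() <= 0.5:                 # only 0 < pi(S) <= 1/2
                    Sbar = [v for v in range(n) if v not in S]
                    # Q(S, Sbar) = sum over crossing edges of pi(x) * P(x, y)
                    q = sum(pi[x] * P[x, y] for x in S for y in Sbar)
                    best = min(best, q / pi[list(S)].sum())
        return best

    print(conductance(P, pi))                 # 0.5 for this chain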

45 Relationship between Mixing Time and Conductance
Theorem. Let $M$ be a finite, reversible, ergodic Markov chain with self-loop probabilities $P(x, x) \ge \frac{1}{2}$ for all states $x$. Let $\Phi$ be the conductance of $M$. Then, for any choice of the initial state $x$, the mixing time of $M$ satisfies
$$\tau_x(\varepsilon) \le \frac{2}{\Phi^2}\left(\ln\frac{1}{\pi(x)} + \ln\frac{1}{\varepsilon}\right).$$
Good lower bounds on the conductance $\Phi$ translate to good upper bounds on the mixing time $\tau_x(\varepsilon)$.
