The Monte Carlo Method
1  The Monte Carlo Method

Example: estimate the value of $\pi$. Choose $X$ and $Y$ independently and uniformly at random in $[0,1]$, and let
$$Z = \begin{cases} 1 & \text{if } X^2 + Y^2 \le 1,\\ 0 & \text{otherwise.} \end{cases}$$
Then $\Pr(Z = 1) = \pi/4$, so $4E[Z] = \pi$.
2  Let $Z_1, \dots, Z_m$ be the values of $m$ independent experiments, and let $W = \sum_{i=1}^m Z_i$. Then
$$E[W] = E\left[\sum_{i=1}^m Z_i\right] = \sum_{i=1}^m E[Z_i] = \frac{m\pi}{4},$$
so $W' = \frac{4}{m} W$ is a natural estimate for $\pi$, and
$$\Pr(|W' - \pi| \ge \epsilon\pi) = \Pr\left(\left|W - \frac{m\pi}{4}\right| \ge \frac{\epsilon m \pi}{4}\right) = \Pr(|W - E[W]| \ge \epsilon E[W]) \le 2e^{-m\pi\epsilon^2/12}$$
by a Chernoff bound (Cor. 4.6).
3  $(\epsilon, \delta)$-approximation

Definition. A randomized algorithm gives an $(\epsilon, \delta)$-approximation for the value $V$ if the output $X$ of the algorithm satisfies
$$\Pr(|X - V| \le \epsilon V) \ge 1 - \delta.$$
The method for approximating $\pi$ gives an $(\epsilon, \delta)$-approximation as long as $\epsilon < 1$ and $m$ is large enough to make $2e^{-m\pi\epsilon^2/12} \le \delta$, so we need
$$m \ge \frac{12 \ln(2/\delta)}{\pi \epsilon^2}.$$
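As a concrete illustration, here is a minimal Python sketch of this estimator, using the sample bound just derived. The function name `estimate_pi` and its parameters are ours, chosen for illustration.

```python
import math
import random

def estimate_pi(eps, delta):
    """(eps, delta)-approximation of pi via Monte Carlo (slides 1-3)."""
    # m >= 12 ln(2/delta) / (pi eps^2) samples make the Chernoff bound <= delta
    m = math.ceil(12 * math.log(2 / delta) / (math.pi * eps ** 2))
    hits = 0
    for _ in range(m):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1:   # Z_i = 1: the point lies in the quarter disc
            hits += 1
    return 4 * hits / m          # W' = 4W/m

print(estimate_pi(0.01, 0.05))   # within 1% of pi with probability >= 0.95
```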
4  Theorem. Let $X_1, \dots, X_m$ be independent and identically distributed indicator random variables, with $\mu = E[X_i]$. If $m \ge \frac{3 \ln(2/\delta)}{\epsilon^2 \mu}$, then
$$\Pr\left(\left|\frac{1}{m}\sum_{i=1}^m X_i - \mu\right| \ge \epsilon\mu\right) \le \delta.$$
That is, $m$ samples provide an $(\epsilon, \delta)$-approximation for $\mu$. (Note that with $\mu = \pi/4$ this recovers exactly the bound on slide 3.)
5  Approximate Counting

Example counting problems:
1. How many spanning trees in a graph?
2. How many perfect matchings in a graph?
6  DNF Counting

DNF = Disjunctive Normal Form.
Problem: how many satisfying assignments does a DNF formula have?
A DNF formula is a disjunction of clauses; each clause is a conjunction of literals:
$$(x_1 \wedge x_2) \vee (x_2 \wedge x_3) \vee (x_1 \wedge x_2 \wedge x_3 \wedge x_4) \vee (x_3 \wedge x_4).$$
Compare to CNF, a conjunction of disjunctions:
$$(x_1 \vee x_2) \wedge (x_1 \vee x_3).$$
Throughout: $m$ clauses, $n$ variables.
Let's first convince ourselves that obvious approaches don't work!
7  DNF counting is hard

Question: why?
We can reduce CNF satisfiability to DNF counting: the negation of a CNF formula is in DNF.
1. Take a CNF formula $f$.
2. Obtain the DNF formula $\neg f$ (by De Morgan's laws).
3. Count the satisfying assignments of $\neg f$.
4. This number is $2^n$ if and only if $f$ is unsatisfiable.
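To make step 2 concrete, here is a worked instance of ours (not spelled out on the slides), applying De Morgan's laws to the two-clause CNF from slide 6:
$$\neg\big((x_1 \vee x_2) \wedge (x_1 \vee x_3)\big) = (\bar{x}_1 \wedge \bar{x}_2) \vee (\bar{x}_1 \wedge \bar{x}_3),$$
a DNF formula whose satisfying assignments are exactly the non-satisfying assignments of the CNF.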
8  DNF counting is #P-complete

#P is the counting analog of NP. Any problem in #P can be reduced (in polynomial time) to the DNF counting problem.
Example #P-complete problems:
1. How many Hamiltonian circuits does a graph have?
2. How many satisfying assignments does a CNF formula have?
3. How many perfect matchings in a graph?
What can we do about a hard problem?
9  $(\epsilon, \delta)$ FPRAS for DNF counting

FPRAS = Fully Polynomial Randomized Approximation Scheme.
Notation:
- $U$: the set of all possible assignments to the variables; $|U| = 2^n$.
- $H \subseteq U$: the set of satisfying assignments.
We want to estimate $Y = |H|$. Given $\epsilon > 0$, $\delta > 0$, find an estimate $X$ such that
1. $\Pr[|X - Y| > \epsilon Y] < \delta$;
2. the algorithm is polynomial in $1/\epsilon$, $1/\delta$, $n$ and $m$.
10  Monte Carlo method

Here's the obvious scheme (Algorithm 1, page 256 in the book).
1. Repeat $N$ times:
   1.1. Sample $x$ randomly from $U$, that is, generate one of the $2^n$ possible assignments uniformly at random.
   1.2. Count a success if $x \in H$ (the formula is satisfied by $x$).
2. Return (fraction of successes) $\times \ |U|$.
Question: how large should $N$ be? We have to evaluate the probability of our estimate being good.
11  Let $\rho = |H|/|U|$, and let the indicator random variable $Z_i = 1$ if the $i$-th trial was successful, so that
$$Z_i = \begin{cases} 1 & \text{with probability } \rho,\\ 0 & \text{with probability } 1 - \rho. \end{cases}$$
Then $Z = \sum_{i=1}^N Z_i$ is a binomial random variable with expected value $E[Z] = N\rho$, and $X = Z|U|/N$ is our estimate of $|H|$.
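A Python sketch of this naive scheme; the clause encoding below (a literal as a pair of variable index and sign) is a hypothetical one we chose for illustration.

```python
import random

def naive_dnf_count(clauses, n, N):
    """Estimate |H| for a DNF formula as X = Z * |U| / N (slides 10-11).

    clauses: list of clauses, each a list of (variable_index, is_positive)
             literals; a clause is satisfied when all its literals are.
    """
    Z = 0
    for _ in range(N):
        v = [random.random() < 0.5 for _ in range(n)]  # uniform over U
        if any(all(v[i] == pos for i, pos in c) for c in clauses):
            Z += 1                                     # Z_i = 1: v is in H
    return Z / N * 2 ** n

# (x1 AND NOT x2) OR (x2 AND x3) over n = 3 variables; the true |H| is 4
clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]
print(naive_dnf_count(clauses, n=3, N=100_000))
```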
12  Probability that our algorithm succeeds

Recall that $X$ denotes our estimate of $|H|$. Then
$$\Pr[(1-\epsilon)|H| < X < (1+\epsilon)|H|] = \Pr[(1-\epsilon)|H| < Z|U|/N < (1+\epsilon)|H|]$$
$$= \Pr[(1-\epsilon)N\rho < Z < (1+\epsilon)N\rho] \ge 1 - e^{-N\rho\epsilon^2/3} - e^{-N\rho\epsilon^2/2} > 1 - 2e^{-N\rho\epsilon^2/3},$$
where we have used Chernoff bounds. For an $(\epsilon, \delta)$-approximation, this has to be greater than $1 - \delta$:
$$2e^{-N\rho\epsilon^2/3} < \delta \iff N > \frac{3}{\rho\epsilon^2}\ln\frac{2}{\delta}.$$
13  Theorem. Let $\rho = |H|/|U|$. Then the Monte Carlo method is an $(\epsilon, \delta)$-approximation scheme for estimating $|H|$ provided that
$$N > \frac{3}{\rho\epsilon^2}\ln\frac{2}{\delta}.$$
14  What's wrong?

How large could $1/\rho$ be? Recall that $\rho$ is the fraction of satisfying assignments.
1. The number of possible assignments is $2^n$.
2. Maybe there are only a polynomial (in $n$) number of satisfying assignments.
3. So $1/\rho$, and hence the required number of samples $N$, could be exponential in $n$.
Question: can you give an example of a formula with only a few satisfying assignments?
15  The trick: skewed sampling

Increase the hit rate $\rho$! Sample from a different universe in which $\rho$ is higher and all elements of $H$ are still represented. What's the new universe?
Notation: $H_i$ is the set of assignments that satisfy clause $i$, so
$$H = H_1 \cup H_2 \cup \dots \cup H_m.$$
Define a new universe
$$U = H_1 \uplus H_2 \uplus \dots \uplus H_m,$$
where $\uplus$ means multiset union.
16  Example - Partition by clauses

$(x_1 \wedge x_2) \vee (x_2 \wedge x_3) \vee (x_1 \wedge x_2 \wedge x_3 \wedge x_4) \vee (x_3 \wedge x_4)$
(Table of assignments $x_1\, x_2\, x_3\, x_4$ against the clause each satisfies; table omitted.)
17  More about the universe $U$

1. $U$ contains only satisfying assignments.
2. $U$ is a multiset (it can contain the same assignment many times).
3. An element of $U$ is a pair $(v, i)$, where $v$ is an assignment and $i$ is a clause it satisfies:
$$U = \{(v, i) : v \in H_i\}.$$
4. Each satisfying assignment $v$ appears once for every clause it satisfies.
18  One way of looking at $U$

Partition by clauses: $m$ parts, where part $i$ contains $H_i$.
19  Another way of looking at $U$

Partition by assignments (one region for each distinct assignment $v$). Each part corresponds to an assignment. Can we count the distinct assignments?
20  Example - Partition by assignments

$(x_1 \wedge x_2) \vee (x_2 \wedge x_3) \vee (x_1 \wedge x_2 \wedge x_3 \wedge x_4) \vee (x_3 \wedge x_4)$
(Table of assignments $x_1\, x_2\, x_3\, x_4$ against the clauses each satisfies; table omitted.)
21  Canonical element

Crucial idea: for each assignment group, fix a canonical element of $U$. An element $(v, i)$ is canonical if $f((v,i)) = 1$, where
$$f((v,i)) = \begin{cases} 1 & \text{if } i = \min\{j : v \in H_j\},\\ 0 & \text{otherwise.} \end{cases}$$
Every assignment group contains exactly one canonical element, so count the number of canonical elements!
Note: any other definition would do, as long as there is exactly one canonical element per assignment.
22  Count canonical elements

Reiterating:
1. Number of satisfying assignments = number of canonical elements.
2. Count the number of canonical elements.
3. Back to the old random sampling method for counting!
23  What is $\rho$?

Lemma. $\rho \ge \frac{1}{m}$ (pretty large).
Proof. $H = \bigcup_{i=1}^m H_i$ is an ordinary union, so $|H_i| \le |H|$. Recall $U = H_1 \uplus H_2 \uplus \dots \uplus H_m$; since this is a multiset union, $|U| = \sum_{i=1}^m |H_i|$. Hence $|U| \le m|H|$ and
$$\rho = \frac{|H|}{|U|} \ge \frac{1}{m}.$$
24  How to generate a random element of $U$?

Look at the partition of $U$ by clauses.
Algorithm Select:
1. Pick a random clause weighted according to the area it occupies:
$$\Pr[i] = \frac{|H_i|}{|U|} = \frac{|H_i|}{\sum_{j=1}^m |H_j|}, \qquad |H_i| = 2^{n - k_i},$$
where $k_i$ is the number of literals in clause $i$.
2. Choose a random satisfying assignment in $H_i$: fix the variables required by clause $i$ and assign random values to the rest to get $v$.
$(v, i)$ is the random element. Running time: $O(n)$.
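A Python sketch of Select under the same hypothetical clause encoding as before, assuming each clause mentions $k_i$ distinct variables (so $|H_i| = 2^{n-k_i}$):

```python
import random

def select(clauses, n):
    """Pick (v, i) uniformly from the multiset U (Algorithm Select, slide 24)."""
    # |H_i| = 2^(n - k_i), where k_i = number of literals in clause i
    weights = [2 ** (n - len(c)) for c in clauses]
    i = random.choices(range(len(clauses)), weights=weights)[0]
    v = [random.random() < 0.5 for _ in range(n)]  # random values everywhere...
    for var, pos in clauses[i]:
        v[var] = pos                               # ...then fix clause i's literals
    return v, i
```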
25  How to test whether an element is canonical?

That is, how do we evaluate $f((v,i))$?
Algorithm Test:
1. Test every clause to see whether $v$ satisfies it:
$$\mathrm{cov}(v) = \{(v, j) : v \in H_j\}.$$
2. If $(v,i)$ has the smallest clause index in $\mathrm{cov}(v)$, then $f((v,i)) = 1$; else $f((v,i)) = 0$.
Running time: $O(nm)$.
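Continuing the same sketch, a Python version of Test:

```python
def test(v, i, clauses):
    """f((v, i)) from slide 21: 1 iff (v, i) is the canonical element."""
    def satisfied(c):
        return all(v[var] == pos for var, pos in c)
    # Select guarantees v is in H_i, so (v, i) is canonical
    # iff no earlier clause j < i is also satisfied by v.
    return 1 if all(not satisfied(clauses[j]) for j in range(i)) else 0
```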
26  Back to random sampling

Algorithm Coverage:
1. $s \leftarrow 0$ (number of successes).
2. Repeat $N$ times: select $(v, i)$ using Select; if $f((v,i)) = 1$ (checked using Test), count a success and increment $s$.
3. Return $s \cdot |U| / N$.
The number of samples needed is (from Theorem 3):
$$N = \frac{3}{\epsilon^2 \rho} \ln\frac{2}{\delta} \le \frac{3m}{\epsilon^2} \ln\frac{2}{\delta}.$$
Sampling and testing are polynomial in $n$ and $m$, so we have an FPRAS.
Theorem. The Coverage algorithm yields an $(\epsilon, \delta)$-approximation to $|H|$ provided that the number of samples is $N \ge \frac{3m}{\epsilon^2}\ln\frac{2}{\delta}$.
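Putting the pieces together, a sketch of Coverage that relies on the `select` and `test` sketches above; the sample count follows the theorem:

```python
import math

def coverage(clauses, n, eps, delta):
    """FPRAS for DNF counting (Algorithm Coverage, slide 26)."""
    m = len(clauses)
    N = math.ceil(3 * m / eps ** 2 * math.log(2 / delta))
    U_size = sum(2 ** (n - len(c)) for c in clauses)  # |U| = sum of |H_i|
    s = 0
    for _ in range(N):
        v, i = select(clauses, n)
        s += test(v, i, clauses)    # count canonical elements only
    return s * U_size / N

# with the example clauses from slide 11's sketch, this returns ~4.0
print(coverage(clauses, n=3, eps=0.1, delta=0.05))
```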
27  Counting Independent Sets

Input: a graph $G = (V, E)$ with $|V| = n$, $|E| = m$. Let $e_1, \dots, e_m$ be an arbitrary ordering of the edges. Let $G_i = (V, E_i)$, where $E_i = \{e_1, \dots, e_i\}$; then $G = G_m$, $G_0 = (V, \emptyset)$, and $G_{i-1}$ is obtained from $G_i$ by removing a single edge.
Let $\Omega(G_i)$ denote the set of independent sets in $G_i$. Then
$$|\Omega(G)| = \frac{|\Omega(G_m)|}{|\Omega(G_{m-1})|} \cdot \frac{|\Omega(G_{m-1})|}{|\Omega(G_{m-2})|} \cdot \frac{|\Omega(G_{m-2})|}{|\Omega(G_{m-3})|} \cdots \frac{|\Omega(G_1)|}{|\Omega(G_0)|} \cdot |\Omega(G_0)|.$$
Define
$$r_i = \frac{|\Omega(G_i)|}{|\Omega(G_{i-1})|}, \qquad i = 1, \dots, m.$$
28  Algorithm: Estimating $r_i$

Input: graphs $G_{i-1} = (V, E_{i-1})$ and $G_i = (V, E_i)$.
Output: $\tilde r_i$, an approximation of $r_i$.
1. $X \leftarrow 0$.
2. Repeat for $M = \lceil 1296 m^2 \epsilon^{-2} \ln\frac{2m}{\delta} \rceil$ independent trials:
   1. generate a uniform sample from $\Omega(G_{i-1})$;
   2. if the sample is an independent set in $G_i$, let $X \leftarrow X + 1$.
3. Return $\tilde r_i \leftarrow X/M$.
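In Python, a sketch of one such estimation loop. The graphs are represented as edge lists, and `sampler` is an assumed black box that returns a (near-)uniform random independent set; producing that sampler is the hard part, addressed later.

```python
import math

def estimate_ratio(edges_prev, edges_i, eps, delta, m, sampler):
    """Estimate r_i = |Omega(G_i)| / |Omega(G_{i-1})| (slide 28).

    sampler(edges) is assumed to return a uniform random independent
    set of the given graph, as a Python set of vertices.
    """
    # the theoretical sample count from the slide
    M = math.ceil(1296 * m ** 2 / eps ** 2 * math.log(2 * m / delta))
    X = 0
    for _ in range(M):
        I = sampler(edges_prev)                    # uniform over Omega(G_{i-1})
        if all(not (u in I and v in I) for u, v in edges_i):
            X += 1                                 # I is independent in G_i too
    return X / M
```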
29  Lemma. $r_i \ge 1/2$.

Proof. $\Omega(G_i) \subseteq \Omega(G_{i-1})$. Suppose that $G_{i-1}$ and $G_i$ differ in the edge $\{u, v\}$. An independent set in $\Omega(G_{i-1}) \setminus \Omega(G_i)$ contains both $u$ and $v$. To bound the size of the set $\Omega(G_{i-1}) \setminus \Omega(G_i)$, we associate each $I \in \Omega(G_{i-1}) \setminus \Omega(G_i)$ with the independent set $I \setminus \{v\} \in \Omega(G_i)$. An independent set $I' \in \Omega(G_i)$ is associated with no more than one independent set $I' \cup \{v\} \in \Omega(G_{i-1}) \setminus \Omega(G_i)$, and thus $|\Omega(G_{i-1}) \setminus \Omega(G_i)| \le |\Omega(G_i)|$. It follows that
$$r_i = \frac{|\Omega(G_i)|}{|\Omega(G_{i-1})|} = \frac{|\Omega(G_i)|}{|\Omega(G_i)| + |\Omega(G_{i-1}) \setminus \Omega(G_i)|} \ge \frac{1}{2}.$$
30  Lemma. When $m \ge 1$ and $0 < \epsilon \le 1$, the procedure for estimating $r_i$ yields an estimate $\tilde r_i$ that is an $(\epsilon/2m, \delta/m)$-approximation for $r_i$.

Our estimate is $2^n \prod_{i=1}^m \tilde r_i$; the true number is $|\Omega(G)| = 2^n \prod_{i=1}^m r_i$. To evaluate the error in our estimate we need to bound the ratio
$$R = \prod_{i=1}^m \frac{\tilde r_i}{r_i}.$$
31  Lemma. Suppose that for all $i$, $1 \le i \le m$, $\tilde r_i$ is an $(\epsilon/2m, \delta/m)$-approximation for $r_i$. Then $\Pr(|R - 1| \le \epsilon) \ge 1 - \delta$.

Proof: For each $1 \le i \le m$, we have
$$\Pr\left(|\tilde r_i - r_i| \le \frac{\epsilon}{2m} r_i\right) \ge 1 - \frac{\delta}{m}.$$
Equivalently,
$$\Pr\left(|\tilde r_i - r_i| > \frac{\epsilon}{2m} r_i\right) < \frac{\delta}{m}.$$
32  By the union bound, the probability that $|\tilde r_i - r_i| > \frac{\epsilon}{2m} r_i$ for any $i$ is at most $\delta$, and hence $|\tilde r_i - r_i| \le \frac{\epsilon}{2m} r_i$ for all $i$ with probability at least $1 - \delta$. Equivalently,
$$1 - \frac{\epsilon}{2m} \le \frac{\tilde r_i}{r_i} \le 1 + \frac{\epsilon}{2m}$$
holds for all $i$ with probability at least $1 - \delta$. When these bounds hold for all $i$, we can combine them to obtain
$$1 - \epsilon \le \left(1 - \frac{\epsilon}{2m}\right)^m \le \prod_{i=1}^m \frac{\tilde r_i}{r_i} \le \left(1 + \frac{\epsilon}{2m}\right)^m \le 1 + \epsilon,$$
that is, $|R - 1| \le \epsilon$ with probability at least $1 - \delta$.
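The outermost inequalities in the last display hold by standard estimates (a justification we add; it is not spelled out on the slide): by Bernoulli's inequality,
$$\left(1 - \frac{\epsilon}{2m}\right)^m \ge 1 - m \cdot \frac{\epsilon}{2m} = 1 - \frac{\epsilon}{2} \ge 1 - \epsilon,$$
and, using $1 + t \le e^t$,
$$\left(1 + \frac{\epsilon}{2m}\right)^m \le e^{\epsilon/2} \le 1 + \epsilon \quad \text{for } 0 < \epsilon \le 1.$$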
33  Estimating $r_i$ (repeated from slide 28)

Input: graphs $G_{i-1} = (V, E_{i-1})$ and $G_i = (V, E_i)$.
Output: $\tilde r_i$, an approximation of $r_i$.
1. $X \leftarrow 0$.
2. Repeat for $M = \lceil 1296 m^2 \epsilon^{-2} \ln\frac{2m}{\delta} \rceil$ independent trials:
   1. generate a uniform sample from $\Omega(G_{i-1})$;
   2. if the sample is an independent set in $G_i$, let $X \leftarrow X + 1$.
3. Return $\tilde r_i \leftarrow X/M$.
34  Definition. Let $w$ be the (random) output of a sampling algorithm for a finite sample space $\Omega$. The sampling algorithm generates an $\epsilon$-uniform sample of $\Omega$ if, for any subset $S$ of $\Omega$,
$$\left| \Pr(w \in S) - \frac{|S|}{|\Omega|} \right| \le \epsilon.$$
A sampling algorithm is a fully polynomial almost uniform sampler (FPAUS) for a problem if, given an input $x$ and a parameter $\epsilon > 0$, it generates an $\epsilon$-uniform sample of $\Omega(x)$, and it runs in time polynomial in $\ln \epsilon^{-1}$ and the size of the input $x$.
35  Estimating $r_i$ (with almost uniform samples)

Input: graphs $G_{i-1} = (V, E_{i-1})$ and $G_i = (V, E_i)$.
Output: $\tilde r_i$, an approximation of $r_i$.
1. $X \leftarrow 0$.
2. Repeat for $M = \lceil 1296 m^2 \epsilon^{-2} \ln\frac{2m}{\delta} \rceil$ independent trials:
   1. generate an $\frac{\epsilon}{6m}$-uniform sample from $\Omega(G_{i-1})$;
   2. if the sample is an independent set in $G_i$, let $X \leftarrow X + 1$.
3. Return $\tilde r_i \leftarrow X/M$.

Lemma. When $m \ge 1$ and $0 < \epsilon \le 1$, this procedure yields an $(\epsilon/2m, \delta/m)$-approximation for $r_i$.
36  How do we generate an $\frac{\epsilon}{6m}$-uniform sample from $\Omega(G_{i-1})$?
37  From Approximate Sampling to Approximate Counting

Theorem. Given a fully polynomial almost uniform sampler (FPAUS) for independent sets in any graph, we can construct a fully polynomial randomized approximation scheme (FPRAS) for the number of independent sets in a graph $G$ with maximum degree at most $\Delta$.
38  The Markov Chain Monte Carlo Method

Idea: define an ergodic Markov chain whose stationary distribution is the desired probability distribution. Let $X_0, X_1, X_2, \dots, X_n$ be a run of the chain. The Markov chain converges to its stationary distribution from any starting state $X_0$, so after some sufficiently large number $r$ of steps, the distribution of the state $X_r$ will be close to the stationary distribution $\pi$ of the Markov chain. Repeating with $X_r$ as the starting point, we can use $X_{2r}$ as a sample, and so on. Thus $X_r, X_{2r}, X_{3r}, \dots$ can be used as almost independent samples from $\pi$.
39  Let $N(x)$ be the set of neighbors of $x$, and let $M \ge \max_{x \in \Omega} |N(x)|$.

Lemma. Consider a Markov chain where, for all $x$ and $y$ with $y \ne x$,
$$P_{x,y} = \begin{cases} \frac{1}{M} & \text{if } y \in N(x),\\ 0 & \text{otherwise,} \end{cases}$$
and $P_{x,x} = 1 - \frac{|N(x)|}{M}$. If this chain is irreducible and aperiodic, then the stationary distribution is the uniform distribution.
Proof. We show that the chain is time-reversible and apply the theorem on time-reversible chains. For any $x \ne y$, if $\pi_x = \pi_y$, then $\pi_x P_{x,y} = \pi_y P_{y,x}$, since $P_{x,y} = P_{y,x} = 1/M$. It follows that the uniform distribution $\pi_x = 1/|\Omega|$ is the stationary distribution.
40  Sampling a uniform distribution on the independent sets

Consider a Markov chain whose states are the independent sets in a graph $G = (V, E)$:
1. $X_0$ is an arbitrary independent set in $G$.
2. To compute $X_{i+1}$:
   1. choose a vertex $v$ uniformly at random from $V$;
   2. if $v \in X_i$, then $X_{i+1} = X_i \setminus \{v\}$;
   3. if $v \notin X_i$, and adding $v$ to $X_i$ still gives an independent set, then $X_{i+1} = X_i \cup \{v\}$;
   4. otherwise, $X_{i+1} = X_i$.
The chain is irreducible. The chain is aperiodic (as $G$ has at least one edge). For $y \ne x$, $P_{x,y} = 1/|V|$ or $0$. The lemma implies that the stationary distribution is the uniform distribution.
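A Python sketch of one transition of this chain; the representation (graph as a dict of adjacency sets) and the function name are ours.

```python
import random

def step_uniform(X, adj, vertices):
    """One transition of the independent-set chain on slide 40.

    X: current independent set (a Python set of vertices);
    adj: dict mapping each vertex to the set of its neighbors.
    """
    v = random.choice(vertices)
    if v in X:
        return X - {v}        # remove v
    if not (adj[v] & X):      # no neighbor of v is in X: adding v is safe
        return X | {v}
    return X                  # otherwise stay put (self-loop)
```

Run from `X = set()` for many steps and read off $X_r, X_{2r}, \dots$ as the almost independent samples described on slide 38.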
41  The Metropolis Algorithm

Suppose now that we want to sample from a non-uniform distribution; for example, we want the probability of an independent set of size $i$ to be proportional to $\lambda^i$. Consider a Markov chain on the independent sets in $G = (V, E)$:
1. $X_0$ is an arbitrary independent set in $G$.
2. To compute $X_{i+1}$:
   1. choose a vertex $v$ uniformly at random from $V$;
   2. if $v \in X_i$, then set $X_{i+1} = X_i \setminus \{v\}$ with probability $\min(1, 1/\lambda)$;
   3. if $v \notin X_i$, and adding $v$ to $X_i$ still gives an independent set, then set $X_{i+1} = X_i \cup \{v\}$ with probability $\min(1, \lambda)$;
   4. otherwise, set $X_{i+1} = X_i$.
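Turning the uniform chain into this Metropolis version is a small change, sketched below with the same hypothetical representation as before:

```python
import random

def step_metropolis(X, adj, vertices, lam):
    """One transition of the Metropolis chain on slide 41: the stationary
    probability of an independent set of size i is proportional to lam**i."""
    v = random.choice(vertices)
    if v in X and random.random() < min(1.0, 1.0 / lam):
        return X - {v}        # removal accepted with prob. min(1, 1/lambda)
    if v not in X and not (adj[v] & X) and random.random() < min(1.0, lam):
        return X | {v}        # addition accepted with prob. min(1, lambda)
    return X
```

Setting `lam = 1.0` recovers the uniform chain of slide 40.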
42  Lemma. For a finite state space $\Omega$, let $M \ge \max_{x \in \Omega} |N(x)|$. For all $x \in \Omega$, let $\pi_x > 0$ be the desired probability of state $x$ in the stationary distribution. Consider a Markov chain where, for all $x$ and $y$ with $y \ne x$,
$$P_{x,y} = \begin{cases} \frac{1}{M} \min\left(1, \frac{\pi_y}{\pi_x}\right) & \text{if } y \in N(x),\\ 0 & \text{otherwise,} \end{cases}$$
and further $P_{x,x} = 1 - \sum_{y \ne x} P_{x,y}$. Then, if this chain is irreducible and aperiodic, the stationary distribution is given by the probabilities $\pi_x$.
43  Proof. We show the chain is time-reversible. For any $x \ne y$ with $y \in N(x)$: if $\pi_x \le \pi_y$, then $P_{x,y} = \frac{1}{M}$ and $P_{y,x} = \frac{1}{M} \cdot \frac{\pi_x}{\pi_y}$, so $\pi_x P_{x,y} = \pi_y P_{y,x}$. Similarly, if $\pi_x > \pi_y$, then $P_{x,y} = \frac{1}{M} \cdot \frac{\pi_y}{\pi_x}$ and $P_{y,x} = \frac{1}{M}$, and it again follows that $\pi_x P_{x,y} = \pi_y P_{y,x}$.
Note that the Metropolis algorithm only needs the ratios $\pi_y / \pi_x$. In our construction, the probability of an independent set $x$ of size $|x|$ is $\lambda^{|x|}/B$ for $B = \sum_x \lambda^{|x|}$, and we can run the chain even though we may not know $B$.