26.1 Metropolis method
CS880: Approximation Algorithms
Scribe: Dave Andrzejewski
Lecturer: Shuchi Chawla
Topic: Metropolis method, volume estimation
Date: 4/26/07

The previous lecture discussed some of the key concepts of Markov Chain Monte Carlo (MCMC) methods, including the stationary distribution π and the mixing time τ_ε. This lecture introduces the Metropolis method for constructing Markov chains in order to sample from a given distribution. The use of sampling methods for volume estimation is also introduced.

26.1 Metropolis method

26.1.1 MCMC review

Recall from last time the key properties of a random-walk Markov chain:

- Ω = the state space
- n = |Ω|
- P = the transition matrix, with P_ij = Pr[i → j]
- π = the stationary distribution, satisfying πP = π
- τ_ε = the mixing time, after which the l_1-norm of the difference between the chain's distribution and the stationary distribution is guaranteed to be < ε

Also recall this important theorem concerning the existence and uniqueness of the stationary distribution π.

Theorem 26.1.1 An aperiodic, irreducible, finite Markov chain is ergodic and has a unique stationary distribution.

We can easily guarantee the aperiodicity of our chain by adding self-loops to all vertices; this increases the mixing time by no more than a constant factor.

26.1.2 Metropolis filter

But how do we actually construct a Markov chain whose stationary distribution equals a given target distribution? We also want the method to have a good (that is, small) mixing time. The Metropolis method achieves both goals by defining the Markov chain as a random walk over a suitably chosen graph.

The approach is as follows. Say we wish to sample values i ∈ Ω from a distribution Q(i). We define an undirected d-regular graph G on Ω, picking the graph in such a way that it has high conductance. From the current node v, pick the next node u uniformly from the neighbors of v. Then:
- If Q(u) ≥ Q(v), move to node u.
- Otherwise, move to node u with probability Q(u)/Q(v), and stay at v with probability 1 − Q(u)/Q(v).

First we examine the graph itself. Since it is connected and undirected, the walk is irreducible. Since all nodes have self-loops (a move can be rejected), it is aperiodic. Therefore this random walk is guaranteed to have a unique stationary distribution π. Now we must show that this stationary distribution is equal to our target distribution Q.

Claim: π = Q

Proof: Start the chain in the distribution Q and take one step. Consider any node v, and calculate the probability Q'(v) of being at node v after this one step. If it equals Q(v), then we have shown that QP = Q, and therefore π = Q.

This probability decomposes into three cases: we move into v from a neighbor u with Q(u) ≥ Q(v), which is accepted with probability Q(v)/Q(u); we move into v from a neighbor u with Q(u) < Q(v), which is always accepted; or we are already at v, propose a neighbor u with Q(u) < Q(v), and stay at v, which happens with probability 1 − Q(u)/Q(v). Each neighbor is proposed with probability 1/d. Let n₁ be the number of neighbors u of v with Q(u) ≥ Q(v). Then

Q'(v) = Σ_{u: Q(u) ≥ Q(v)} Q(u) (1/d) (Q(v)/Q(u)) + Σ_{u: Q(u) < Q(v)} Q(u) (1/d) + Σ_{u: Q(u) < Q(v)} Q(v) (1/d) (1 − Q(u)/Q(v))   (26.1.1)
      = (n₁/d) Q(v) + (1/d) Σ_{u: Q(u) < Q(v)} Q(u) + ((d − n₁)/d) Q(v) − (1/d) Σ_{u: Q(u) < Q(v)} Q(u)   (26.1.2)
      = Q(v)   (26.1.3)

This shows that a random walk using the Metropolis method is guaranteed to converge to our target distribution Q. It is worth noting that our scheme of uniformly choosing a neighbor is a special case of the general Metropolis-Hastings sampler [3]. In the more general case, a proposal distribution is used to select the next candidate state conditioned on the current state. This proposal distribution need not be uniform over the neighbors, and in fact need not even be symmetric.

26.1.3 Volume estimation

An interesting application of sampling techniques is the problem of estimating the volume of a convex body K ⊆ R^n using an inclusion oracle, which reveals whether a given point is contained in the body or not. We are also given two balls, one completely enclosing K and one completely enclosed by K.
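As a concrete illustration, the Metropolis filter can be sketched in a few lines of Python. The small cycle graph, the target Q, and all names below are invented for this demo rather than taken from the notes; on a rapidly mixing chain, the empirical visit frequencies should approach Q.

```python
import random

def metropolis_step(v, neighbors, Q):
    """One step of the Metropolis walk on a d-regular graph: propose a
    uniform neighbor u of v, accept with probability min(1, Q(u)/Q(v))."""
    u = random.choice(neighbors[v])
    if Q[u] >= Q[v] or random.random() < Q[u] / Q[v]:
        return u          # accept the proposed move
    return v              # reject: stay at v (this acts as a self-loop)

# Demo: sample from Q on a 4-cycle (a 2-regular graph).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
Q = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

random.seed(0)
counts = {i: 0 for i in Q}
v = 0
for _ in range(200_000):
    v = metropolis_step(v, neighbors, Q)
    counts[v] += 1
freqs = {i: c / 200_000 for i, c in counts.items()}
```

Replacing the uniform neighbor choice with a proposal distribution T(v → u) and accepting with probability min(1, Q(u)T(u → v) / (Q(v)T(v → u))) gives the general Metropolis-Hastings rule mentioned above.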
Call these K ⊆ B(0,R) and K ⊇ B(0,r). The technique we use here has interesting parallels to the concept of self-reducibility. What is the probability that a uniformly chosen point in the larger ball lands in K? We can use the smaller ball to lower-bound this probability by vol(B(0,r))/vol(B(0,R)). However, for large n we suffer the curse of dimensionality [4]: this lower bound is very small, namely (r/R)^n.
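To get a feel for how quickly this bound collapses, it helps to plug in numbers; the choice r = 1, R = 2 below is an arbitrary illustration, not a value from the notes.

```python
# Fraction of B(0, R) covered by B(0, r) in n dimensions: (r/R)^n.
# With r = 1 and R = 2 this is 2^(-n), so naive rejection sampling from
# the big ball would need on the order of 2^n trials to hit K even once.
r, R = 1.0, 2.0
for n in (2, 10, 100):
    p = (r / R) ** n
    print(f"n = {n:3d}: (r/R)^n = {p:.3e}")
```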
The key insight of our approach is to sample from a series of nested regions K_i, constructed so that the probability of a random point from one region landing in the previous one is as large as possible.

Figure 26.1.1: An example of our volume estimation problem setup, showing K and an intermediate region K_{i+1} between the balls of radius r and R.

K = K_0 ⊆ K_1 ⊆ ... ⊆ K_l = B(0,R)   (26.1.4)

These regions are constructed so that vol(K_{i+1})/vol(K_i) is small for all i. Furthermore we choose a small l ≈ n log(R/r), so we do not need to iterate too many times. We can then estimate the volume of K from estimates of the ratios of adjacent K_i via the telescoping product

vol(K) = (vol(K_0)/vol(K_1)) · (vol(K_1)/vol(K_2)) · ... · (vol(K_{l−1})/vol(K_l)) · vol(B(0,R))   (26.1.5)

since vol(K_l) = vol(B(0,R)). So now we simply need to figure out how to sample from K_i. We define K_i as a rescaling of K, subject to containment in B(0,R). This gives a nice bound on the volume ratio of adjacent regions:

K_i = (1 + 1/n)^i K ∩ B(0,R)   (26.1.6)

vol(K_{i+1}) / vol(K_i) ≤ (1 + 1/n)^n ≤ e   (26.1.7)
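The telescoping scheme above can be demonstrated end-to-end on a toy two-dimensional instance. Everything concrete here is an assumption made for the demo: K is the unit disk, R = 4, and uniform samples from each K_i are drawn by simple rejection (rather than a random walk), which keeps the sketch short.

```python
import math
import random

def in_K(x, y):
    """Membership oracle for K; here K is the unit disk (demo assumption)."""
    return x * x + y * y <= 1.0

n, R = 2, 4.0                  # dimension and outer-ball radius
l = 4                          # levels: (1 + 1/n)^l = 1.5^4 > R/r = 4

def in_Ki(x, y, i):
    """K_i = (1 + 1/n)^i K intersected with B(0, R)."""
    s = (1 + 1 / n) ** i
    return x * x + y * y <= R * R and in_K(x / s, y / s)

def sample_Ki(i):
    """Uniform point in K_i by rejection from the bounding square."""
    while True:
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if in_Ki(x, y, i):
            return x, y

random.seed(2)
N = 20_000
vol_est = math.pi * R * R              # vol(K_l) = vol(B(0, R))
for i in range(l):                     # multiply by vol(K_i)/vol(K_{i+1})
    hits = sum(in_Ki(*sample_Ki(i + 1), i) for _ in range(N))
    vol_est *= hits / N
# vol_est should now be close to the true area of the unit disk, pi.
```

Each ratio is bounded away from zero by (26.1.7), which is exactly why only a modest number of samples per level is needed.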
Note that (1 + 1/n)^n ≥ 2, so (1 + 1/n)^{n log(R/r)} ≥ R/r; since K contains B(0,r), this many rescalings of K suffice to cover B(0,R), and l = O(n log(R/r)) is indeed enough. To sample from K_i, we then simply sample uniformly from K and rescale. But how do we sample from K itself?

To approach this problem, we employ the MCMC methods we have been discussing. We define our random walk, known as the ball walk, as follows. From any point u ∈ K, sample a point uniformly at random from the ball of radius δ centered at u, B(u, δ), and move to the new point if it is inside K. If the point is outside K, stay at u. Note that the walk defined by this rule allows us to reach any point from any other point, and also allows self-loops. Therefore it is irreducible and aperiodic, and must have a unique stationary distribution π. The resulting Markov chain is time-reversible, since its transition density is symmetric; therefore the stationary distribution is uniform over K.

For the practicality of this scheme, it is important to choose a good value of δ in order to get good samples from K. Taken to one extreme, a huge δ would result in constantly picking points outside K, and therefore remaining at the current point. Likewise, a very small δ would result in taking very small steps, making it very slow to explore all of K. Also, if something is known about the geometry of K, it may be helpful to rescale the proposal ball to an ellipse. This is accomplished by putting the body in isotropic position via an affine transformation, so as to remove all sharp corners.

Figure 26.1.2: Rescaling the proposal ball to an ellipse based on the geometry of K.

The first approach based on this technique was polynomial in n, but with an unfortunate order O(n^23) [1]. Newer approaches, dubbed hit-and-run, first choose a direction and then sample uniformly from the line segment in that direction contained in K. This drastically improves the mixing time, achieving Õ(n^4) [2].

Figure 26.1.3: The hit-and-run technique.
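A minimal sketch of the ball walk in two dimensions follows. The body K (a unit square), the step radius δ = 0.25, and the convergence check are all invented for this demo; since the uniform distribution over K is stationary, long-run averages such as the mean x-coordinate should approach their uniform-distribution values.

```python
import random

def ball_walk_step(x, delta, in_K):
    """One ball-walk step: propose a uniform point in B(x, delta);
    move there if it lies in K, otherwise stay at x."""
    while True:   # rejection-sample a uniform point in the 2-D disk
        dx = random.uniform(-delta, delta)
        dy = random.uniform(-delta, delta)
        if dx * dx + dy * dy <= delta * delta:
            break
    y = (x[0] + dx, x[1] + dy)
    return y if in_K(y) else x

def in_square(p):                      # demo body: the unit square
    return 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0

random.seed(3)
x = (0.01, 0.01)                       # start in a corner
total, steps = 0.0, 100_000
for _ in range(steps):
    x = ball_walk_step(x, 0.25, in_square)
    total += x[0]
mean_x = total / steps                 # near 0.5 if close to uniform
```

The trade-off discussed above shows up directly here: shrinking `delta` keeps almost every proposal inside K but slows exploration, while growing it wastes steps on rejected points near the boundary.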
The corresponding inapproximability result is that one cannot estimate the volume within a constant factor in less than Ω(n^2) time.

References

[1] Martin Dyer, Alan Frieze, Ravi Kannan. A random polynomial-time algorithm for approximating the volume of convex bodies. JACM, 1991.

[2] László Lovász, Santosh Vempala. Simulated Annealing in Convex Bodies and an O*(n^4) Volume Algorithm. FOCS, 2003.

[3] D. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.

[4] Trevor Hastie, Robert Tibshirani, Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, 2001.

[5] V. Vazirani. Approximation Algorithms. Springer, 2001.
More information