CE 530 Molecular Simulation
1 CE 530 Molecular Simulation Lecture 10 Simple Biasing Methods David A. Kofke Department of Chemical Engineering SUNY Buffalo kofke@eng.buffalo.edu
2 Review
Monte Carlo simulation
- Markov chain to generate elements of the ensemble with the proper distribution
- Metropolis algorithm relies on microscopic reversibility: π_i π_ij = π_j π_ji
- two parts to a Markov step: generate a trial move (underlying transition matrix); decide to accept the move or keep the original state
Determination of acceptance probabilities
- detailed analysis of forward and reverse moves
- we examined molecule-displacement and volume-change trials
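The two parts of a Markov step translate directly into code. A minimal sketch in Python, assuming a symmetric proposal distribution; the function names here are mine, not from the lecture:

```python
import math
import random

def metropolis_step(x, energy, propose, beta=1.0, rng=random.random):
    """One Markov step: (1) generate a trial state from the underlying
    transition matrix (here, a symmetric proposal), then (2) accept it with
    probability min(1, exp(-beta*dU)), else keep the original state."""
    x_trial = propose(x)
    dU = energy(x_trial) - energy(x)
    if dU <= 0.0 or rng() < math.exp(-beta * dU):
        return x_trial
    return x
```

Because the proposal is symmetric, the min(1, e^{−βΔU}) acceptance is what enforces π_i π_ij = π_j π_ji.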
3 Performance Measures
How do we improve the performance of a MC simulation?
- characterization of performance
- means to improve performance
Return to our consideration of a general Markov process
- fixed number of well-defined states
- fully specified transition matrix Π, with elements π_ij (use our three-state prototype):
Π = [π_11 π_12 π_13; π_21 π_22 π_23; π_31 π_32 π_33]
Performance measures
- rate of convergence
- variance in occupancies
4-5 Rate of Convergence
What is the likely distribution of states after a run of finite length? Is it close to the limiting distribution?
π^(n) = π^(0) Π^n
- π^(n) = (π_1^(n), π_2^(n), π_3^(n)) is the state distribution after n moves
- element π_ij^(n) of Π^n is the probability of being in state j after n moves, beginning in state i
We can apply similarity transforms to understand the behavior
- eigenvector equation: Π Φ = Φ Λ, so Π = Φ Λ Φ^(-1)
- eigenvalue matrix Λ = diag(λ_1, λ_2, λ_3); eigenvector matrix Φ = (φ_1 φ_2 φ_3)
- Π^n = (Φ Λ Φ^(-1))(Φ Λ Φ^(-1))...(n times)... = Φ Λ^n Φ^(-1)
- Λ^n = diag(λ_1^n, λ_2^n, λ_3^n)
6 Rate of Convergence 2
Likely distribution after a finite run:
π^(n) = π^(0) Π^n = π^(0) Φ Λ^n Φ^(-1)
- the largest eigenvalue of a stochastic matrix is λ_1 = 1, so Λ^n = diag(1, λ_2^n, λ_3^n) → diag(1, 0, 0) as n → ∞
- convergence rate is determined by the magnitude of the other eigenvalues; a value very close to unity indicates slow convergence
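The eigenvalue picture is easy to verify numerically. A sketch using an illustrative 3-state transition matrix of my own, not one of the lecture's examples:

```python
import numpy as np

# Illustrative 3-state transition matrix (rows sum to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Eigenvalue magnitudes, sorted descending; the largest is 1, and the
# second-largest governs the convergence rate (error ~ |lambda_2|^n)
evals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]

# Distribution after n = 50 steps, starting entirely in state 1
pi_n = np.array([1.0, 0.0, 0.0]) @ np.linalg.matrix_power(P, 50)

# Limiting distribution: left eigenvector for eigenvalue 1, normalized
w, V = np.linalg.eig(P.T)
pi_inf = np.real(V[:, np.argmax(np.real(w))])
pi_inf /= pi_inf.sum()
```

For this particular matrix |λ_2| = 0.4, so the deviation from the limiting distribution decays roughly like 0.4^n.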
7 Occupancy Variance
Imagine repeating the Markov sequence many times (L → ∞), each time taking a fixed number of steps, M
- tabulate a histogram for each sequence; examine the variances in the occupancy fractions
- occupancy fraction of state i in sequence k: p_i^(k) = m_i^(k)/M
- variance: σ_i² = (1/L) Σ_{k=1..L} (p_i^(k) − π_i)²
- covariance: σ_i σ_j ≡ (1/L) Σ_{k=1..L} (p_i^(k) − π_i)(p_j^(k) − π_j)
Through propagation of error, the occupancy (co)variances sum to give the variances in the ensemble averages; e.g., for a 2-state system
σ²_⟨U⟩ = U_1² σ_1² + 2 U_1 U_2 σ_1σ_2 + U_2² σ_2²
We would like these to be small
8 Occupancy Variance 2
A formula for the occupancy (co)variances is known
- variance: M σ_i² = π_i(π_i − 1) + 2 π_i s_ii
- covariance: M σ_i σ_j = π_i π_j + π_i s_ij + π_j s_ji (i ≠ j)
- S = (I − Π + Φ)^(-1) − Φ, where Φ here denotes the limiting matrix whose rows each equal the limiting distribution π
The right-hand sides are independent of M, so the standard deviation decreases as 1/√M
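These formulas can be sanity-checked in the one case with a known closed-form answer: a chain whose every row equals π, so successive samples are independent and the occupancies are multinomial. The check below is my own, not from the lecture:

```python
import numpy as np

pi = np.array([0.2, 0.3, 0.5])
Pi = np.tile(pi, (3, 1))    # "i.i.d." transition matrix: every row is pi
Phi = np.tile(pi, (3, 1))   # limiting matrix: every row is pi

# S = (I - Pi + Phi)^(-1) - Phi; for this chain it reduces to I - Phi
S = np.linalg.inv(np.eye(3) - Pi + Phi) - Phi

# M*sigma_i^2 = pi_i(pi_i - 1) + 2 pi_i s_ii  -> multinomial pi_i(1 - pi_i)
M_var = pi * (pi - 1.0) + 2.0 * pi * np.diag(S)

# M*sigma_i*sigma_j = pi_i pi_j + pi_i s_ij + pi_j s_ji  -> -pi_i pi_j
M_cov01 = pi[0] * pi[1] + pi[0] * S[0, 1] + pi[1] * S[1, 0]
```

Both reduce to the independent-sampling (multinomial) results, as they must.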
9 Example Performance Values
Four transition matrices Π sharing the same limiting distribution π are compared: Inefficient, Barker, Most efficient, and Metropolis. For each, the slide tabulates Π, its eigenvalues λ, and the occupancy covariance matrix Σ.
10 Example Performance Values
A transition matrix Π with lots of movement 1 → 2 and 3 → 4, but little movement between (1,2) and (3,4). The slide tabulates its limiting distribution π, eigenvalues λ, and covariance matrix Σ.
11 Heuristics to Improve Performance
Keep the system moving
- minimize the diagonal elements of the transition matrix
- avoid repeated transitions among a few states
Typical physical situations where convergence is poor
- a large number of equivalent states with poor transitions between regions of them (entangled polymers)
- a large number of low-probability states and a few high-probability states (low-density associating systems)
(figure: phase space Γ, with a low- but nonzero-probability bottleneck region between high-probability regions)
12 Biasing the Underlying Markov Process
Detailed balance for a trial/acceptance Markov process:
π_i τ_ij min(1, χ) = π_j τ_ji min(1, 1/χ)
Often it happens that τ_ij is small while χ is large (or vice versa)
- even if the product is of order unity, π_ij will be small because of the min()
The underlying TPM can be adjusted (biased) to enhance movement among states
- the bias can be removed in the reverse trial, or in the acceptance
- detailed balance requires in general χ = π_j τ_ji / (π_i τ_ij)
- ideally, χ will be unity (all trials accepted) even for a large change
- this level of improvement is rarely achieved; it requires coordination of forward and reverse moves
13 Example: Biased Insertion in GCMC
Grand-canonical Monte Carlo (µVT)
- fluctuations in N require insertion/deletion trials
- at high density, insertions may be rarely accepted: τ_ij is small for j a state having an additional but non-overlapping molecule
- at high chemical potential, the limiting distribution strongly favors additional molecules (π ∝ e^{βµN}), so χ is large for an (N+1) state with no overlap
Apply biasing to improve acceptance; first look at the unbiased algorithm
14 Insertion/Deletion Trial Move 1. Specification
Gives a new configuration of the same volume but a different number of molecules
Choose with equal probability:
- insertion: add a molecule to a randomly selected position
- deletion: remove a randomly selected molecule from the system
Limiting distribution: grand-canonical ensemble
π(r^N) dr^N = (1/(Ξ Λ^{dN})) e^{−βU(r^N) + βµN} dr^N
15 Insertion/Deletion Trial Move 2. Analysis of Trial Probabilities
Detailed specification of moves and probabilities; event [reverse event] and probability [reverse probability]:
- select insertion trial [select deletion trial]: 1/2 [1/2]
- place molecule at r_{N+1} [delete molecule N+1]: dr/V [1/(N+1)]
- accept move [accept move]: min(1, χ) [min(1, 1/χ)]
Forward-step probability: (1/2)(dr/V) min(1, χ)
Reverse-step probability: (1/2)(1/(N+1)) min(1, 1/χ)
χ is formulated to satisfy detailed balance
16-17 Insertion/Deletion Trial Move 3. Analysis of Detailed Balance
Detailed balance π_i π_ij = π_j π_ji, with the forward-step probability (1/2)(dr/V) min(1, χ), the reverse-step probability (1/2)(1/(N+1)) min(1, 1/χ), and the grand-canonical limiting distribution π(r^N) dr^N = (1/(Ξ Λ^{dN})) e^{−βU + βµN} dr^N:

[e^{−βU_old + βµN}/(Ξ Λ^{dN})] dr^N × (1/2)(dr/V) min(1, χ) = [e^{−βU_new + βµ(N+1)}/(Ξ Λ^{d(N+1)})] dr^{N+1} × (1/2)(1/(N+1)) min(1, 1/χ)
18 Insertion/Deletion Trial Move 3. Analysis of Detailed Balance
Remember: for an insertion, N+1 = N_old + 1; for a deletion, N+1 = N_old
Solving the detailed-balance equation for the acceptance parameter:
χ = [V/(Λ^d (N+1))] e^{−β(U_new − U_old) + βµ}
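The unbiased acceptance parameter is a one-line function. A sketch with names of my own choosing, with β, Λ, and d as parameters (reduced units by default):

```python
import math

def gcmc_insertion_chi(dU, mu, N, V, beta=1.0, Lam=1.0, d=3):
    """chi = [V / (Lam^d (N+1))] exp(-beta*(U_new - U_old) + beta*mu).
    The insertion trial is accepted with min(1, chi); the matching deletion
    trial uses min(1, 1/chi), with chi evaluated for the same pair of states."""
    return V / (Lam**d * (N + 1)) * math.exp(-beta * dU + beta * mu)
```

A large energy penalty dU (an overlap) drives chi toward zero, which is exactly the high-density problem the biased algorithm addresses.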
19 Biased Insertion/Deletion Trial Move 1. Specification
Trial-move algorithm; choose with equal probability:
- insertion: identify the region where insertion will not lead to overlap (the insertable region); let the volume of this region be εV; place the molecule randomly somewhere in this region
- deletion: select any molecule and delete it
20 Biased Insertion/Deletion Trial Move 2. Analysis of Trial Probabilities
Detailed specification of moves and probabilities; event [reverse event] and probability [reverse probability]:
- select insertion trial [select deletion trial]: 1/2 [1/2]
- place molecule at r_{N+1} [delete molecule N+1]: dr/(εV) [1/(N+1)] (the only difference from the unbiased algorithm)
- accept move [accept move]: min(1, χ) [min(1, 1/χ)]
Forward-step probability: (1/2)(dr/(εV)) min(1, χ)
Reverse-step probability: (1/2)(1/(N+1)) min(1, 1/χ)
21 Biased Insertion/Deletion Trial Move 3. Analysis of Detailed Balance
Detailed balance π_i π_ij = π_j π_ji:
[e^{−βU_old + βµN}/(Ξ Λ^{dN})] dr^N × (1/2)(dr/(εV)) min(1, χ) = [e^{−βU_new + βµ(N+1)}/(Ξ Λ^{d(N+1)})] dr^{N+1} × (1/2)(1/(N+1)) min(1, 1/χ)
Remember: for an insertion, N+1 = N_old + 1; for a deletion, N+1 = N_old
Acceptance: χ = [εV/(Λ^d (N+1))] e^{−β(U_new − U_old) + βµ}
ε must be computed even when doing a deletion, since χ depends upon it
- for a deletion, ε is computed for the configuration after the molecule is removed
- for an insertion, ε is computed for the configuration before the molecule is inserted
22 Biased Insertion/Deletion Trial Move 4. Comments
χ = ε e^{βµ − βΔU} V/(Λ^d (N+1))
Advantage is gained when
- ε is small (difficult to find an acceptable insertion without bias)
- e^{βµ} V/(Λ^d (N+1)) is large (difficult to accept a deletion without bias)
For hard spheres near freezing the slide quotes representative values of βµ + ln(V/Λ^d N), ε, and χ
Identifying and characterizing (computing ε) the non-overlap region may be difficult
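Relative to the unbiased acceptance, the only change is the factor ε; a sketch (naming mine), taking ε as already measured for the appropriate configuration as described above:

```python
import math

def biased_gcmc_chi(dU, mu, N, V, eps, beta=1.0, Lam=1.0, d=3):
    """chi = [eps*V / (Lam^d (N+1))] exp(-beta*(U_new - U_old) + beta*mu).
    For an insertion, eps is the insertable fraction of the configuration
    before the molecule is added; for a deletion, after it is removed.
    Deletions are accepted with min(1, 1/chi)."""
    return eps * V / (Lam**d * (N + 1)) * math.exp(-beta * dU + beta * mu)
```

Setting eps = 1 recovers the unbiased acceptance parameter exactly.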
23 Force-Bias Trial Move 1. Specification
Move an atom preferentially in the direction of lower energy (along the force f)
- select displacement δr within a cubic volume (side 2δr_max) centered on the present position
- within this region, select δr with probability p(δr) = exp(λβ f·δr)/C(f)
- C(f) = c_x c_y is a normalization constant, with
c_x = ∫_{−δr_max}^{+δr_max} e^{λβ f_x δr_x} d(δr_x) = 2 sinh(λβ f_x δr_max)/(λβ f_x)
- this favors δr_y in the same direction as f_y (and likewise for each component)
Pangali, Rao, and Berne, Chem. Phys. Lett. (1978)
24 An Aside: Sampling from a Distribution
Rejection method for sampling from a complex distribution p(x)
- write p(x) = C a(x) b(x), where a(x) is a simpler distribution and b(x) lies between zero and unity
- recipe: generate a uniform random variate U on (0,1); generate a variate X on the distribution a(x); if U < b(X), keep X; if not, try again with a new U and X
We wish to sample from p(x) ∝ e^{qx} for x in (−δ, +δ)
- we know how to sample on e^{−q(x−x_0)} for x in (x_0, ∞): x = x_0 − (1/q) ln[U(0,1)]; equivalently, on e^{q(x−δ)} for x in (−∞, +δ): x = δ + (1/q) ln[U(0,1)]
- use the rejection method with a(x) = e^{q(x−δ)} and b(x) = 0 for x < −δ or x > +δ, b(x) = 1 otherwise
- i.e., sample on a(x) and reject values outside the desired range
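The recipe above, specialized to p(x) ∝ e^{qx} on (−δ, +δ), fits in a few lines; a sketch of mine, assuming q > 0:

```python
import math
import random

def sample_truncated_exp(q, delta, rng=random.random):
    """Sample from p(x) ~ exp(q*x) on (-delta, +delta), q > 0:
    draw X from the one-sided exponential a(x) ~ exp(q*(x - delta)) on
    (-inf, +delta) via X = delta + ln(U)/q, then reject values outside the
    interval (b(x) = 1 inside, 0 outside) and try again."""
    while True:
        x = delta + math.log(rng()) / q    # variate on a(x)
        if x > -delta:                     # accept when b(x) = 1
            return x
```

All accepted samples lie in (−δ, +δ), and for q > 0 their mean is pulled toward +δ, as the target distribution requires.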
25 Force-Bias Trial Move 2. Analysis of Trial Probabilities
Detailed specification of moves and probabilities; event [reverse event] and probability [reverse probability]:
- select molecule k [select molecule k]: 1/N [1/N]
- move to r_new [move back to r_old]: p_old(δr) dr [p_new(−δr) dr]
- accept move [accept move]: min(1, χ) [min(1, 1/χ)]
Forward-step probability: (1/N) p_old(δr) dr min(1, χ)
Reverse-step probability: (1/N) p_new(−δr) dr min(1, 1/χ)
26-27 Force-Bias Trial Move 3. Analysis of Detailed Balance
With p(δr) = exp(λβ f·δr)/C(f), the canonical limiting distribution π(r^N) dr^N = (1/Z_N) e^{−βU(r^N)} dr^N, and detailed balance π_i π_ij = π_j π_ji:

(1/Z_N) e^{−βU_old} dr^N × (1/N) p_old(δr) dr min(1, χ) = (1/Z_N) e^{−βU_new} dr^N × (1/N) p_new(−δr) dr min(1, 1/χ)

where p_old(δr) = e^{+λβ f_old·δr}/C(f_old) and p_new(−δr) = e^{−λβ f_new·δr}/C(f_new). Solving for the acceptance parameter:

χ = [C(f_old)/C(f_new)] e^{−β(U_new − U_old)} e^{−λβ(f_new + f_old)·δr}
28 Force-Bias Trial Move 4. Comments
χ = [C(f_old)/C(f_new)] e^{−β(U_new − U_old)} e^{−λβ(f_new + f_old)·δr}
Necessary to compute the force both before and after the move
From the definition of the force, U_new − U_old ≈ −(1/2)(f_new + f_old)·δr
- λ = 1/2 makes the argument of the exponent nearly zero
- λ = 0 reduces to the unbiased case
Force-bias makes Monte Carlo more like molecular dynamics
- an example of a hybrid MC/MD method
Improvement in convergence by a factor of 2-3 has been observed; worth the effort?
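In one dimension the biased selection of δr and the acceptance parameter can be written out directly; a sketch with my own names, using inverse-CDF sampling of the truncated exponential:

```python
import math

def c_norm(f, lam, beta, dmax):
    """Normalization c = 2 sinh(lam*beta*f*dmax)/(lam*beta*f); -> 2*dmax as f -> 0."""
    k = lam * beta * f
    return 2.0 * dmax if abs(k) < 1e-12 else 2.0 * math.sinh(k * dmax) / k

def sample_dx(f, lam, beta, dmax, u):
    """Draw a displacement with p(dx) ~ exp(lam*beta*f*dx) on (-dmax, +dmax)
    by inverting the CDF; u is a uniform variate on (0,1)."""
    k = lam * beta * f
    if abs(k) < 1e-12:
        return (2.0 * u - 1.0) * dmax          # unbiased limit
    lo, hi = math.exp(-k * dmax), math.exp(k * dmax)
    return math.log(lo + u * (hi - lo)) / k

def force_bias_chi(dU, f_old, f_new, dx, lam, beta, dmax):
    """chi = [C(f_old)/C(f_new)] exp(-beta*dU - lam*beta*(f_new + f_old)*dx)."""
    return (c_norm(f_old, lam, beta, dmax) / c_norm(f_new, lam, beta, dmax)
            * math.exp(-beta * dU - lam * beta * (f_new + f_old) * dx))
```

With zero forces and zero energy change, chi reduces to 1, recovering the unbiased displacement move.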
29 Association-Bias Trial Move 1. Specification
Low-density, strongly attracting molecules
- when together, form strong associations that take long to break
- when apart, are slow to find each other to form associations
- performance of the simulation is a problem
Perform trial moves that put one molecule preferentially in the vicinity of another (attempt placement in a region of volume εV around it)
- suffer overlaps, maybe 50% of the time
- compare to the problem of finding an associate only one time in (say) 1000
Must also preferentially attempt the reverse move
30 Association-Bias Trial Move 1. Specification (continued)
With equal probability, choose a move:
- association: select a molecule that is not associated; select another molecule (associated or not); put the first molecule in a volume εV in the vicinity of the second
- disassociation: select a molecule that is associated; move it to a random position anywhere in the system
31 Association-Bias Trial Move 2. Analysis of Trial Probabilities
Detailed specification of moves and probabilities; event [reverse event] and probability [reverse probability]:
- select molecule k [select molecule k]: 1/N_un [1/(N_assoc + 1)]
- move to r_new [move back to r_old]: 1/(N_assoc εV) (*) [1/V]
- accept move [accept move]: min(1, χ) [min(1, 1/χ)]
Forward-step probability: min(1, χ)/(N_un N_assoc εV)
Reverse-step probability: min(1, 1/χ)/((N_assoc + 1) V)
(*) incorrect
32 Association-Bias Trial Move 3. Analysis of Detailed Balance
Detailed balance π_i π_ij = π_j π_ji, with the canonical limiting distribution:
(1/Z_N) e^{−βU_old} dr^N × min(1, χ)/(N_un N_assoc εV) = (1/Z_N) e^{−βU_new} dr^N × min(1, 1/χ)/((N_assoc + 1) V)
Acceptance: χ = [ε N_un N_assoc/(N_assoc + 1)] e^{−β(U_new − U_old)}
33 Association-Bias Trial Move 4. Comments
This is incorrect!  χ = [ε N_un N_assoc/(N_assoc + 1)] e^{−β(U_new − U_old)}
Need to account for the full probability of positioning at r_new
- this region has extra probability of being selected if it lies in the vicinity of two molecules
- must look in the local environment of the position to see if it lies also in the neighborhood of other atoms; add a 1/(εV) for each such atom
The algorithm requires one to identify, or keep track of, the number of associated/unassociated molecules
34 Using an Approximation Potential 1. Specification
Evaluating the potential energy is the most time-consuming part of a simulation; some potentials are especially time-consuming, e.g.
- three-body potentials
- Ewald sum
Idea: move the system through a Markov subchain using an approximation to the real potential (cheaper to compute); at intervals, accept or reject the entire subchain using the correct potential
35 Approximation Potential 2. Analysis of Trial Probabilities
What are π_ij and π_ji for the super-step that carries state i through intermediate states 1, 2, 3, ..., n to state j?
Given that each elementary Markov step obeys detailed balance for the approximate potential, one can show that the super-step i → j also obeys detailed balance (for the approximate potential):
π_i^a π_ij^(n) = π_j^a π_ji^(n)
- very hard to analyze without this result
- would have to consider all paths from i to j to get the transition probability
36-38 Approximation Potential 3. Analysis of Detailed Balance
Formulate the acceptance criterion to satisfy detailed balance for the real potential:
π_i π_ij^(n) min(1, χ) = π_j π_ji^(n) min(1, 1/χ)
Approximate-potential detailed balance, π_i^a π_ij^(n) = π_j^a π_ji^(n), gives π_ij^(n)/π_ji^(n) = π_j^a/π_i^a, so
π_i (π_j^a/π_i^a) min(1, χ) = π_j min(1, 1/χ)
χ = (π_j/π_j^a) / (π_i/π_i^a)
Close to 1 if the approximate potential is a good description of the true potential
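For Boltzmann distributions the super-step acceptance reduces to energy differences on the two potentials; a one-line sketch (naming mine):

```python
import math

def superstep_chi(dU_true, dU_approx, beta=1.0):
    """chi = (pi_j pi_i^a)/(pi_i pi_j^a) = exp(-beta*(dU_true - dU_approx)),
    where dU_true = U_j - U_i on the true potential and dU_approx is the same
    difference on the approximate potential used to generate the subchain."""
    return math.exp(-beta * (dU_true - dU_approx))
```

If the approximation were perfect (dU_true = dU_approx), every subchain would be accepted.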
39 Summary
Good Monte Carlo keeps the system moving among a wide variety of states
At times sampling of a wide distribution is not done well
- many states of comparable probability, not easily reached
- a few states of high probability, hard to find and then escape
Biasing the underlying transition probabilities can remedy the problem
- add bias to the underlying TPM
- remove the bias in the acceptance step so the overall TPM is valid
Examples
- insertion/deletion bias in GCMC
- force bias
- association bias
- using an approximate potential
More informationThere are self-avoiding walks of steps on Z 3
There are 7 10 26 018 276 self-avoiding walks of 38 797 311 steps on Z 3 Nathan Clisby MASCOS, The University of Melbourne Institut für Theoretische Physik Universität Leipzig November 9, 2012 1 / 37 Outline
More informationGases and the Virial Expansion
Gases and the irial Expansion February 7, 3 First task is to examine what ensemble theory tells us about simple systems via the thermodynamic connection Calculate thermodynamic quantities: average energy,
More informationMonte Carlo and cold gases. Lode Pollet.
Monte Carlo and cold gases Lode Pollet lpollet@physics.harvard.edu 1 Outline Classical Monte Carlo The Monte Carlo trick Markov chains Metropolis algorithm Ising model critical slowing down Quantum Monte
More informationProblem Set 2. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 30
Problem Set 2 MAS 622J/1.126J: Pattern Recognition and Analysis Due: 5:00 p.m. on September 30 [Note: All instructions to plot data or write a program should be carried out using Matlab. In order to maintain
More informationComputer Vision Group Prof. Daniel Cremers. 14. Sampling Methods
Prof. Daniel Cremers 14. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric
More informationInferring from data. Theory of estimators
Inferring from data Theory of estimators 1 Estimators Estimator is any function of the data e(x) used to provide an estimate ( a measurement ) of an unknown parameter. Because estimators are functions
More informationQuantifying Uncertainty
Sai Ravela M. I. T Last Updated: Spring 2013 1 Markov Chain Monte Carlo Monte Carlo sampling made for large scale problems via Markov Chains Monte Carlo Sampling Rejection Sampling Importance Sampling
More informationSimulation - Lectures - Part III Markov chain Monte Carlo
Simulation - Lectures - Part III Markov chain Monte Carlo Julien Berestycki Part A Simulation and Statistical Programming Hilary Term 2018 Part A Simulation. HT 2018. J. Berestycki. 1 / 50 Outline Markov
More informationSTA 294: Stochastic Processes & Bayesian Nonparametrics
MARKOV CHAINS AND CONVERGENCE CONCEPTS Markov chains are among the simplest stochastic processes, just one step beyond iid sequences of random variables. Traditionally they ve been used in modelling a
More informationLecture V: Multicanonical Simulations.
Lecture V: Multicanonical Simulations. 1. Multicanonical Ensemble 2. How to get the Weights? 3. Example Runs (2d Ising and Potts models) 4. Re-Weighting to the Canonical Ensemble 5. Energy and Specific
More informationHill climbing: Simulated annealing and Tabu search
Hill climbing: Simulated annealing and Tabu search Heuristic algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Hill climbing Instead of repeating local search, it is
More informationComputer Simulation of Peptide Adsorption
Computer Simulation of Peptide Adsorption M P Allen Department of Physics University of Warwick Leipzig, 28 November 2013 1 Peptides Leipzig, 28 November 2013 Outline Lattice Peptide Monte Carlo 1 Lattice
More informationI. QUANTUM MONTE CARLO METHODS: INTRODUCTION AND BASICS
I. QUANTUM MONTE CARLO METHODS: INTRODUCTION AND BASICS Markus Holzmann LPMMC, UJF, Grenoble, and LPTMC, UPMC, Paris markus@lptl.jussieu.fr http://www.lptl.jussieu.fr/users/markus (Dated: January 24, 2012)
More informationGeneralized Ensembles: Multicanonical Simulations
Generalized Ensembles: Multicanonical Simulations 1. Multicanonical Ensemble 2. How to get the Weights? 3. Example Runs and Re-Weighting to the Canonical Ensemble 4. Energy and Specific Heat Calculation
More informationOptimization of quantum Monte Carlo (QMC) wave functions by energy minimization
Optimization of quantum Monte Carlo (QMC) wave functions by energy minimization Julien Toulouse, Cyrus Umrigar, Roland Assaraf Cornell Theory Center, Cornell University, Ithaca, New York, USA. Laboratoire
More informationCS281A/Stat241A Lecture 22
CS281A/Stat241A Lecture 22 p. 1/4 CS281A/Stat241A Lecture 22 Monte Carlo Methods Peter Bartlett CS281A/Stat241A Lecture 22 p. 2/4 Key ideas of this lecture Sampling in Bayesian methods: Predictive distribution
More informationStaging Is More Important than Perturbation Method for Computation of Enthalpy and Entropy Changes in Complex Systems
5598 J. Phys. Chem. B 2003, 107, 5598-5611 Staging Is More Important than Perturbation Method for Computation of Enthalpy and Entropy Changes in Complex Systems Nandou Lu,*,, David A. Kofke, and Thomas
More informationMetropolis/Variational Monte Carlo. Microscopic Theories of Nuclear Structure, Dynamics and Electroweak Currents June 12-30, 2017, ECT*, Trento, Italy
Metropolis/Variational Monte Carlo Stefano Gandolfi Los Alamos National Laboratory (LANL) Microscopic Theories of Nuclear Structure, Dynamics and Electroweak Currents June 12-30, 2017, ECT*, Trento, Italy
More informationWang-Landau Monte Carlo simulation. Aleš Vítek IT4I, VP3
Wang-Landau Monte Carlo simulation Aleš Vítek IT4I, VP3 PART 1 Classical Monte Carlo Methods Outline 1. Statistical thermodynamics, ensembles 2. Numerical evaluation of integrals, crude Monte Carlo (MC)
More informationMCMC and Gibbs Sampling. Kayhan Batmanghelich
MCMC and Gibbs Sampling Kayhan Batmanghelich 1 Approaches to inference l Exact inference algorithms l l l The elimination algorithm Message-passing algorithm (sum-product, belief propagation) The junction
More informationLecture 5. G. Cowan Lectures on Statistical Data Analysis Lecture 5 page 1
Lecture 5 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,
More informationMonte Carlo Radiation Transfer I
Monte Carlo Radiation Transfer I Monte Carlo Photons and interactions Sampling from probability distributions Optical depths, isotropic emission, scattering Monte Carlo Basics Emit energy packet, hereafter
More informationRobert Collins CSE586, PSU. Markov-Chain Monte Carlo
Markov-Chain Monte Carlo References Problem Intuition: In high dimension problems, the Typical Set (volume of nonnegligable prob in state space) is a small fraction of the total space. High-Dimensional
More informationDimensionality Reduction and Principal Components
Dimensionality Reduction and Principal Components Nuno Vasconcelos (Ken Kreutz-Delgado) UCSD Motivation Recall, in Bayesian decision theory we have: World: States Y in {1,..., M} and observations of X
More informationExploring the energy landscape
Exploring the energy landscape ChE210D Today's lecture: what are general features of the potential energy surface and how can we locate and characterize minima on it Derivatives of the potential energy
More informationIntroduction to Machine Learning CMU-10701
Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos Contents Markov Chain Monte Carlo Methods Sampling Rejection Importance Hastings-Metropolis Gibbs Markov Chains
More informationSTAT 425: Introduction to Bayesian Analysis
STAT 425: Introduction to Bayesian Analysis Marina Vannucci Rice University, USA Fall 2017 Marina Vannucci (Rice University, USA) Bayesian Analysis (Part 2) Fall 2017 1 / 19 Part 2: Markov chain Monte
More informationAn Introduction to Two Phase Molecular Dynamics Simulation
An Introduction to Two Phase Molecular Dynamics Simulation David Keffer Department of Materials Science & Engineering University of Tennessee, Knoxville date begun: April 19, 2016 date last updated: April
More informationQUANTUM MECHANICS I PHYS 516. Solutions to Problem Set # 5
QUANTUM MECHANICS I PHYS 56 Solutions to Problem Set # 5. Crossed E and B fields: A hydrogen atom in the N 2 level is subject to crossed electric and magnetic fields. Choose your coordinate axes to make
More informationCosmology & CMB. Set5: Data Analysis. Davide Maino
Cosmology & CMB Set5: Data Analysis Davide Maino Gaussian Statistics Statistical isotropy states only two-point correlation function is needed and it is related to power spectrum Θ(ˆn) = lm Θ lm Y lm (ˆn)
More informationPhase Equilibria of binary mixtures by Molecular Simulation and PR-EOS: Methane + Xenon and Xenon + Ethane
International Journal of ChemTech Research CODEN( USA): IJCRGG ISSN : 0974-4290 Vol.5, No.6, pp 2975-2979, Oct-Dec 2013 Phase Equilibria of binary mixtures by Molecular Simulation and PR-EOS: Methane +
More information6 Markov Chain Monte Carlo (MCMC)
6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods, with dependent samples from an ergodic Markov chain, whose limiting (stationary) distribution
More informationFrom Rosenbluth Sampling to PERM - rare event sampling with stochastic growth algorithms
Lecture given at the International Summer School Modern Computational Science 2012 Optimization (August 20-31, 2012, Oldenburg, Germany) From Rosenbluth Sampling to PERM - rare event sampling with stochastic
More informationPotts And XY, Together At Last
Potts And XY, Together At Last Daniel Kolodrubetz Massachusetts Institute of Technology, Center for Theoretical Physics (Dated: May 16, 212) We investigate the behavior of an XY model coupled multiplicatively
More informationGaussian Quiz. Preamble to The Humble Gaussian Distribution. David MacKay 1
Preamble to The Humble Gaussian Distribution. David MacKay Gaussian Quiz H y y y 3. Assuming that the variables y, y, y 3 in this belief network have a joint Gaussian distribution, which of the following
More informationPolymer Solution Thermodynamics:
Polymer Solution Thermodynamics: 3. Dilute Solutions with Volume Interactions Brownian particle Polymer coil Self-Avoiding Walk Models While the Gaussian coil model is useful for describing polymer solutions
More information