CE 530 Molecular Simulation
Lecture 10: Simple Biasing Methods
David A. Kofke
Department of Chemical Engineering, SUNY Buffalo
kofke@eng.buffalo.edu

2 Review
- Monte Carlo simulation
  - Markov chain to generate elements of the ensemble with the proper distribution
  - Metropolis algorithm relies on microscopic reversibility: π_i π_ij = π_j π_ji
- Two parts to a Markov step
  - generate trial move (underlying transition matrix)
  - decide to accept the move or keep the original state
- Determination of acceptance probabilities
  - detailed analysis of forward and reverse moves
  - we examined molecule-displacement and volume-change trials

3 Performance Measures
- How do we improve the performance of a MC simulation?
  - characterization of performance
  - means to improve performance
- Return to our consideration of a general Markov process
  - fixed number of well-defined states
  - fully specified transition matrix; use our three-state prototype:
      Π = | π_11  π_12  π_13 |
          | π_21  π_22  π_23 |
          | π_31  π_32  π_33 |
- Performance measures
  - rate of convergence
  - variance in occupancies

4 Rate of Convergence 1.
- What is the likely distribution of states after a run of finite length? Is it close to the limiting distribution?
    π^(n) = (π_1^(n)  π_2^(n)  π_3^(n)) = π^(0) Π^n
  e.g., the probability of being in state 3 after n moves, beginning in state 1, is
    π_13^(n) = [ (1 0 0) Π^n ]_3
- We can apply similarity transforms to understand the behavior of Π^n
  - eigenvector equation: Π Φ = Φ Λ, so Π = Φ Λ Φ^{-1}
  - eigenvalue matrix: Λ = diag(λ_1, λ_2, λ_3)
  - eigenvector matrix: Φ = (φ_1  φ_2  φ_3)

5 Rate of Convergence 1. (continued)
- As before, π^(n) = π^(0) Π^n; apply the similarity transform repeatedly:
    Π^n = (Φ Λ Φ^{-1})(Φ Λ Φ^{-1})...(n times)... = Φ Λ (Φ^{-1}Φ) Λ (Φ^{-1}Φ) ... Λ Φ^{-1} = Φ Λ^n Φ^{-1}
  with
    Λ^n = diag(λ_1^n, λ_2^n, λ_3^n)

6 Rate of Convergence 2.
- Likely distribution after a finite run of n steps:
    π^(n) = π^(0) Π^n = π^(0) Φ Λ^n Φ^{-1}
- The largest eigenvalue of a stochastic matrix is λ_1 = 1, and all others satisfy |λ| < 1, so
    Λ^n = diag(λ_1^n, λ_2^n, λ_3^n) → diag(1, 0, 0)
- Convergence rate is determined by the magnitude of the other eigenvalues
  - λ_2 very close to unity indicates slow convergence (numerical sketch below)
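
To make the eigenvalue analysis concrete, here is a minimal NumPy sketch. The three-state transition matrix is a placeholder invented for this illustration (it is not one of the lecture's examples); it happens to have stationary distribution (0.25, 0.5, 0.25).

```python
import numpy as np

# Placeholder three-state transition matrix (each row sums to 1).
P = np.array([[0.90,  0.05,  0.05],
              [0.025, 0.95,  0.025],
              [0.05,  0.05,  0.90]])

# Left eigenvectors of P govern the row-vector iteration pi(n) = pi(0) Pi^n.
evals, evecs = np.linalg.eig(P.T)
order = np.argsort(-np.abs(evals))
evals = np.real(evals[order])
print("eigenvalues:", evals)                 # -> [1.0, 0.9, 0.85]

# lambda_1 = 1 carries the limiting distribution ...
pi_lim = np.real(evecs[:, order[0]])
pi_lim /= pi_lim.sum()
print("limiting distribution:", pi_lim)      # -> [0.25, 0.5, 0.25]

# ... and |lambda_2| sets how fast pi(n) approaches it.
pi0 = np.array([1.0, 0.0, 0.0])              # start in state 1
for n in (1, 10, 100):
    print(n, pi0 @ np.linalg.matrix_power(P, n))
```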

7 Occupancy Variance 1.
- Imagine repeating the Markov sequence many times (L → ∞), each time taking a fixed number of steps, M
- Tabulate a histogram for each sequence k = 1, 2, 3, 4, ..., L; examine the variances in the occupancy fractions p_i^(k) = m_i^(k)/M:
    σ_i² = (1/L) Σ_{k=1..L} (p_i^(k) - π_i)²
    σ_iσ_j = (1/L) Σ_{k=1..L} (p_i^(k) - π_i)(p_j^(k) - π_j)
- Through propagation of error, the occupancy (co)variances sum to give the variances in the ensemble averages; e.g. (for a 2-state system)
    σ_U² = U_1² σ_1² + 2 U_1 U_2 (σ_1σ_2) + U_2² σ_2²
- We would like these to be small

8 Occupancy Variance 2.
- A formula for the occupancy (co)variance is known:
    M σ_i² = π_i(π_i - 1) + 2 π_i s_ii          (variance)
    M σ_iσ_j = π_iπ_j + π_i s_ij + π_j s_ji     (covariance, i ≠ j)
    S = (I - Π + Φ)^{-1} - Φ
  where Φ here is the limiting matrix, each of whose rows equals π
- Right-hand sides are independent of M
- Standard deviation decreases as 1/√M (numerical sketch below)
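
The (co)variance formulas are easy to evaluate numerically; a short sketch, continuing with the placeholder chain from the previous code block and building Φ as the limiting matrix whose rows all equal π:

```python
import numpy as np

P = np.array([[0.90,  0.05,  0.05],
              [0.025, 0.95,  0.025],
              [0.05,  0.05,  0.90]])
pi = np.array([0.25, 0.5, 0.25])             # stationary distribution of P

Phi = np.tile(pi, (3, 1))                    # limiting matrix: every row is pi
S = np.linalg.inv(np.eye(3) - P + Phi) - Phi

# M sigma_i sigma_j = pi_i pi_j + pi_i s_ij + pi_j s_ji - pi_i delta_ij;
# the diagonal reduces to the slide's variance formula pi_i(pi_i-1) + 2 pi_i s_ii.
piS = pi[:, None] * S
MSigma = np.outer(pi, pi) + piS + piS.T - np.diag(pi)
print(MSigma)       # each row sums to ~0, since the occupancy fractions sum to 1
```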

9 Example Performance Values
Limiting distribution: π = (0.25  0.5  0.25)

Inefficient:
  Π = | 0.97  0.02  0.01 |      λ = (1, 0.96, 0.96)
      | 0.01  0.98  0.01 |
      | 0.01  0.02  0.97 |
  Σ = |  9.2  -6.1  -3.1 |
      | -6.1  12.2  -6.1 |
      | -3.1  -6.1   9.2 |

Barker:
  Π = | 0.42  0.33  0.25 |      λ = (1, 0.33, 0.17)
      | 0.17  0.66  0.17 |
      | 0.25  0.33  0.42 |
  Σ = |  0.30  -0.25  -0.05 |
      | -0.25   0.50  -0.25 |
      | -0.05  -0.25   0.30 |

Most efficient:
  Π = | 0    1    0   |         λ = (1, 0, -1)
      | 0.5  0    0.5 |
      | 0    1    0   |
  Σ = |  0.125  0  -0.125 |
      |  0      0   0     |
      | -0.125  0   0.125 |

Metropolis:
  Π = | 0     0.5  0.5  |       λ = (1, 0, -0.5)
      | 0.25  0.5  0.25 |
      | 0.5   0.5  0    |
  Σ = |  0.10   -0.125   0.02  |
      | -0.125   0.25   -0.125 |
      |  0.02   -0.125   0.10  |

(Σ entries are the M-scaled occupancy (co)variances computed from the formulas of the previous slide.)

10 Example Performance Values
Limiting distribution: π = (0.25  0.25  0.25  0.25)

  Π = | 0     0.99  0.01  0    |      λ = (1, 0.98, -0.99, -0.99)
      | 0.99  0     0     0.01 |
      | 0     0.01  0     0.99 |
      | 0.01  0     0.99  0    |

- Lots of movement 1 ↔ 2 and 3 ↔ 4; little movement between (1,2) and (3,4)

Covariance matrix:
  Σ = |  6.2   6.2  -6.2  -6.2 |
      |  6.2   6.2  -6.2  -6.2 |
      | -6.2  -6.2   6.2   6.2 |
      | -6.2  -6.2   6.2   6.2 |

11 Heuristics to Improve Performance
- Keep the system moving
  - minimize diagonal elements of the transition matrix
  - avoid repeated transitions among a few states
- Typical physical situations where convergence is poor
  - large number of equivalent states, with poor transitions between regions of them (e.g., entangled polymers)
  - large number of low-probability states and a few high-probability states (e.g., low-density associating systems)
[Figure: phase space Γ, showing a high-probability region joined to a low- but nonzero-probability region through a bottleneck]

12 Biasing the Underlying Markov Process
- Detailed balance for a trial/acceptance Markov process:
    π_i τ_ij min(1, χ) = π_j τ_ji min(1, 1/χ)
- Often it happens that τ_ij is small while χ is large (or vice versa)
  - even if the product is of order unity, π_ij will be small because of the min()
- The underlying TPM can be adjusted (biased) to enhance movement among states
  - the bias can be removed in the reverse trial, or in the acceptance
  - in general:
      χ = (π_j τ_ji) / (π_i τ_ij)
  - ideally χ will be unity (all trials accepted) even for a large change
    - rarely achieve this level of improvement
    - requires coordination of forward and reverse moves
(A minimal acceptance test is sketched below.)
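
In code, the bias enters only through the underlying trial probabilities τ; the acceptance test itself is generic. A minimal, library-free sketch (the function and argument names are mine, not the lecture's), done with logarithms so that large energy or probability ratios do not overflow:

```python
import math, random

def accept_biased_move(log_pi_i, log_pi_j, log_tau_ij, log_tau_ji):
    """Accept the move i -> j with probability min(1, chi), where
    chi = (pi_j * tau_ji) / (pi_i * tau_ij)."""
    log_chi = (log_pi_j + log_tau_ji) - (log_pi_i + log_tau_ij)
    return random.random() < math.exp(min(0.0, log_chi))
```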

13 Example: Biased Insertion in GCMC
- Grand-canonical Monte Carlo (µVT)
  - fluctuations in N require insertion/deletion trials
- At high density, insertions may be rarely accepted
  - τ_ij is small for j a state having an additional but non-overlapping molecule
- At high chemical potential, the limiting distribution strongly favors additional molecules: π ~ e^{βµN}
  - χ is large for an (N+1) state with no overlap
- Apply biasing to improve acceptance
  - first look at the unbiased algorithm

14 Insertion/Deletion Trial Move 1. Specification
- Gives a new configuration of the same volume but different number of molecules
- Choose with equal probability:
  - insertion: add a molecule to a randomly selected position
  - deletion: remove a randomly selected molecule from the system
- Limiting distribution: grand-canonical ensemble
    π(r^N) dr^N = (1 / (Ξ Λ^{dN})) e^{-βU(r^N) + βµN} dr^N

15 Insertion/Deletion Trial Move 2. Analysis of Trial Probabilities
Detailed specification of trial moves and probabilities:

  Event [reverse event]                            Probability [reverse probability]
  Select insertion trial [select deletion trial]   1/2  [1/2]
  Place molecule at r_{N+1} [delete molecule N+1]  dr/V  [1/(N+1)]
  Accept move [accept move]                        min(1, χ)  [min(1, 1/χ)]

Forward-step probability: (1/2) (dr/V) min(1, χ)
Reverse-step probability: (1/2) (1/(N+1)) min(1, 1/χ)
χ is formulated to satisfy detailed balance

16 Insertion/Deletion Trial Move 3. Analysis of Detailed Balance
- Detailed balance: π_i π_ij = π_j π_ji
- Forward-step probability: (1/2) (dr/V) min(1, χ)
- Reverse-step probability: (1/2) (1/(N+1)) min(1, 1/χ)
- Limiting distribution:
    π(r^N) dr^N = (1 / (Ξ Λ^{dN})) e^{-βU(r^N) + βµN} dr^N

17 Insertion/Deletion Trial Move 3. Analysis of Detailed Balance (continued)
Inserting the trial probabilities and the limiting distribution into π_i π_ij = π_j π_ji (old state: N molecules; new state: N+1):

  (e^{-βU_old + βµN} / (Ξ Λ^{dN})) dr^N · (1/2)(dr/V) · min(1, χ)
      = (e^{-βU_new + βµ(N+1)} / (Ξ Λ^{d(N+1)})) dr^{N+1} · (1/2)(1/(N+1)) · min(1, 1/χ)

18 Insertion/Deletion Trial Move 3. Analysis of Detailed Balance (continued)
Solving the detailed-balance equation for the acceptance ratio:

  χ = (V / (Λ^d (N+1))) e^{βµ} e^{-β(U_new - U_old)}     (acceptance)

Remember: for an insertion, (N+1) = N_old + 1 is the new state; for a deletion, (N+1) = N_old is the old state, and the move is accepted with min(1, 1/χ). (A code sketch follows.)
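
A minimal sketch of the unbiased move, assuming a cubic box of volume V, a hypothetical energy_of(coords) callable, and Lambda3 standing in for Λ^d with d = 3; molecules are kept as a plain list of positions, consistent with the labeled-molecule bookkeeping above.

```python
import math, random

def gcmc_step(coords, energy_of, beta, mu, V, Lambda3):
    """One unbiased GCMC insertion/deletion trial (sketch)."""
    L = V ** (1.0 / 3.0)
    U_old = energy_of(coords)
    if random.random() < 0.5:                       # insertion trial
        trial = coords + [[random.uniform(0.0, L) for _ in range(3)]]
        dU = energy_of(trial) - U_old
        chi = V * math.exp(beta * mu - beta * dU) / (Lambda3 * len(trial))
        if random.random() < min(1.0, chi):
            return trial
    elif coords:                                    # deletion trial
        k = random.randrange(len(coords))
        trial = coords[:k] + coords[k + 1:]
        dU = energy_of(trial) - U_old
        # reverse of an insertion: accept with min(1, 1/chi)
        inv_chi = Lambda3 * len(coords) * math.exp(-beta * mu - beta * dU) / V
        if random.random() < min(1.0, inv_chi):
            return trial
    return coords                                   # trial rejected
```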

19 Biased Insertion/Deletion Trial Move 1. Specification
Trial-move algorithm: choose with equal probability:
- Insertion
  - identify the region where insertion will not lead to overlap (the "insertable region")
  - let the volume of this region be εV
  - place the molecule randomly somewhere in this region
- Deletion
  - select any molecule and delete it
[Figure: insertable region]

20 Biased Insertion/Deletion Trial Move 2. Analysis of Trial Probabilities
Detailed specification of trial moves and probabilities:

  Event [reverse event]                            Probability [reverse probability]
  Select insertion trial [select deletion trial]   1/2  [1/2]
  Place molecule at r_{N+1} [delete molecule N+1]  dr/(εV)  [1/(N+1)]   <- only difference from the unbiased algorithm
  Accept move [accept move]                        min(1, χ)  [min(1, 1/χ)]

Forward-step probability: (1/2) (dr/(εV)) min(1, χ)
Reverse-step probability: (1/2) (1/(N+1)) min(1, 1/χ)

21 Biased Insertion/Deletion Trial Move 3. Analysis of Detailed Balance
Detailed balance π_i π_ij = π_j π_ji:

  (e^{-βU_old + βµN} / (Ξ Λ^{dN})) dr^N · (1/2)(dr/(εV)) · min(1, χ)
      = (e^{-βU_new + βµ(N+1)} / (Ξ Λ^{d(N+1)})) dr^{N+1} · (1/2)(1/(N+1)) · min(1, 1/χ)

  χ = (εV / (Λ^d (N+1))) e^{βµ} e^{-β(U_new - U_old)}     (acceptance)

(Remember: for insertion, (N+1) = N_old + 1; for deletion, (N+1) = N_old.)
- ε must be computed even when doing a deletion, since χ depends upon it
  - for a deletion, ε is computed for the configuration after the molecule is removed
  - for an insertion, ε is computed for the configuration before the molecule is inserted
(A code sketch follows.)
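
Relative to the unbiased sketch above, only the generation of the trial position and the ε factor in χ change. A sketch assuming a hypothetical helper insertable_region(coords) that returns candidate non-overlapping points together with their volume fraction ε:

```python
import math, random

def biased_insertion(coords, energy_of, insertable_region, beta, mu, V, Lambda3):
    """Biased GCMC insertion trial (sketch); the deletion is analogous, but eps
    must be recomputed for the configuration with the molecule removed."""
    points, eps = insertable_region(coords)      # region of volume eps*V
    if not points:
        return coords                            # nowhere to insert
    trial = coords + [random.choice(points)]
    dU = energy_of(trial) - energy_of(coords)
    chi = eps * V * math.exp(beta * mu - beta * dU) / (Lambda3 * len(trial))
    return trial if random.random() < min(1.0, chi) else coords
```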

22 Biased Insertion/Deletion Trial Move 4. Comments
- Advantage is gained when ε is small and e^{βµ} is large:
    χ = ε e^{βµ} (V / (Λ^d (N+1))) e^{-βΔU}
  - e^{βµ} large: difficult to accept a deletion without bias
  - ε small: difficult to find an acceptable insertion without bias
- For hard spheres near freezing, roughly: βµ + ln(V/Λ^d N) ≈ 16-17, ε ≈ 10^{-7}, giving χ ≈ 4
- Identifying and characterizing the non-overlap region (computing ε) may be difficult

23 Force-Bias Trial Move 1. Specification
- Move an atom preferentially in the direction of lower energy (along the force)
  - select displacement δr in a cubic volume centered on the present position
  - within this region, select with probability (2-d notation)
      p(δr) = exp[λβ f·δr] / C(f) = e^{λβ f_x δr_x} e^{λβ f_y δr_y} / (c_x c_y)
  - C(f) = c_x c_y is a normalization constant:
      c_x = ∫_{-δr_max}^{+δr_max} e^{λβ f_x δr_x} d(δr_x) = 2 sinh(λβ f_x δr_max) / (λβ f_x)
    (which reduces to 2δr_max as λ → 0)
  - favors δr_y in the same direction as f_y
Pangali, Rao, and Berne, Chem. Phys. Lett. 47, 413 (1978)

24 An Aside: Sampling from a Distribution
- Rejection method for sampling from a complex distribution p(x)
  - write p(x) = C a(x) b(x)
    - a(x) is a simpler distribution
    - b(x) lies between zero and unity
  - recipe:
    - generate a uniform random variate U on (0,1)
    - generate a variate X on the distribution a(x)
    - if U < b(X), keep X; if not, try again with a new U and X
- Example: we wish to sample from p(x) ∝ e^{qx} for x ∈ (-δ, +δ)
  - we know how to sample on e^{-q(x - x_0)} for x ∈ (x_0, ∞): x = x_0 - (1/q) ln[U(0,1)]
  - use the rejection method with
    - a(x) = e^{q(x - δ)} for x ≤ δ
    - b(x) = 0 for x < -δ or x > +δ; 1 otherwise
  - i.e., sample on a(x) and reject values outside the desired range (sketch below)
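
A direct transcription of this recipe into Python, with q and δ as in the example; the guard clauses for q ≤ 0 are my addition, so the sketch can be reused in the force-bias move below:

```python
import math, random

def sample_truncated_exp(q, delta):
    """Sample x with density proportional to exp(q*x) on (-delta, +delta)."""
    if q == 0.0:
        return random.uniform(-delta, delta)     # distribution is flat
    if q < 0.0:
        return -sample_truncated_exp(-q, delta)  # mirror-image case
    while True:
        # exact sample from a(x) ~ exp(q*(x - delta)) on (-inf, delta]
        x = delta + math.log(1.0 - random.random()) / q
        if x > -delta:                           # b(x) = 1 only inside (-delta, +delta)
            return x
```

The rejection loop becomes inefficient when q·delta is small, but it mirrors the slide's recipe exactly.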

25 Force-Bias Trial Move 2. Analysis of Trial Probabilities
Detailed specification of trial moves and probabilities:

  Event [reverse event]                      Probability [reverse probability]
  Select molecule k [select molecule k]      1/N  [1/N]
  Move to r_new [move back to r_old]         p_old(δr) dr  [p_new(-δr) dr]
  Accept move [accept move]                  min(1, χ)  [min(1, 1/χ)]

Forward-step probability: (1/N) p_old(δr) dr min(1, χ)
Reverse-step probability: (1/N) p_new(-δr) dr min(1, 1/χ)

26 Force-Bias Trial Move 3. Analysis of Detailed Balance
With p(δr) = exp[λβ f·δr] / C(f) and limiting distribution π(r^N) dr^N = (e^{-βU(r^N)} / Z_N) dr^N, detailed balance π_i π_ij = π_j π_ji reads:

  (e^{-βU_old} / Z_N) dr^N · (1/N) (e^{+λβ f_old·δr} / C(f_old)) dr · min(1, χ)
      = (e^{-βU_new} / Z_N) dr^N · (1/N) (e^{-λβ f_new·δr} / C(f_new)) dr · min(1, 1/χ)

27 Force-Bias Trial Move 3. Analysis of Detailed Balance (continued)
Solving for the acceptance ratio:

  χ = (C(f_old) / C(f_new)) e^{-β(U_new - U_old)} e^{-λβ(f_new + f_old)·δr}     (acceptance)

28 Force-Bias Trial Move 4. Comments
- Necessary to compute the force both before and after the move:
    χ = (C(f_old) / C(f_new)) e^{-β(U_new - U_old)} e^{-λβ(f_new + f_old)·δr}
- From the definition of force, f = -∇U:
    U_new - U_old ≈ -(1/2)(f_new + f_old)·δr
  - λ = 1/2 makes the argument of the exponent nearly zero
  - λ = 0 reduces to the unbiased case
- Force-bias makes Monte Carlo more like molecular dynamics
  - example of a hybrid MC/MD method
- Improvement in convergence by a factor of 2-3 observed
  - worth the effort? (sketch below)
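
Putting the pieces together in one dimension: the per-coordinate bias distribution is exactly the truncated exponential of the aside, so this sketch reuses sample_truncated_exp from above. energy(x) and force(x) are hypothetical callables, and λ = 1/2 follows the comment on this slide.

```python
import math, random

def force_bias_step(x, energy, force, beta, lam=0.5, dr_max=0.1):
    """One 1-d force-bias trial move (sketch)."""
    def c(f):  # normalization: 2 sinh(lam*beta*f*dr_max) / (lam*beta*f)
        a = lam * beta * f
        return 2.0 * dr_max if a == 0.0 else 2.0 * math.sinh(a * dr_max) / a

    f_old = force(x)
    dr = sample_truncated_exp(lam * beta * f_old, dr_max)  # biased displacement
    x_new = x + dr
    f_new = force(x_new)
    dU = energy(x_new) - energy(x)
    chi = (c(f_old) / c(f_new)) * math.exp(-beta * dU
                                           - lam * beta * (f_new + f_old) * dr)
    return x_new if random.random() < min(1.0, chi) else x
```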

29 Association-Bias Trial Move 1. Specification
- Low-density, strongly attracting molecules
  - when together, form strong associations that take long to break
  - when apart, are slow to find each other to form associations
  - performance of the simulation is a problem
- Perform moves that put one molecule preferentially in the vicinity of another
  - suffer overlaps, maybe 50% of the time
  - compare to the problem of finding an associate only 1 time in (say) 1000
- Must also preferentially attempt the reverse move
[Figure: attempt placement in the region of volume εV surrounding another molecule]

30 Association-Bias Trial Move 1. Specification (continued)
With equal probability, choose a move:
- Association
  - select a molecule that is not associated
  - select another molecule (associated or not)
  - put the first molecule in a volume εV in the vicinity of the second
- Dis-association
  - select a molecule that is associated
  - move it to a random position anywhere in the system

31 Association-Bias Trial Move 2. Analysis of Trial Probabilities
Detailed specification of trial moves and probabilities:

  Event [reverse event]                      Probability [reverse probability]
  Select molecule k [select molecule k]      1/N_un  [1/(N_assoc + 1)]
  Move to r_new [move back to r_old]         1/(N_assoc εV) (*)  [1/V]
  Accept move [accept move]                  min(1, χ)  [min(1, 1/χ)]

Forward-step probability: min(1, χ) / (N_un N_assoc εV)
Reverse-step probability: min(1, 1/χ) / ((N_assoc + 1) V)
(*) incorrect - see slide 33

32 Association-Bias Trial Move 3. Analysis of Detailed Balance
Detailed balance π_i π_ij = π_j π_ji:

  (e^{-βU_old} / Z_N) dr^N · (1 / (N_un N_assoc εV)) · min(1, χ)
      = (e^{-βU_new} / Z_N) dr^N · (1 / ((N_assoc + 1) V)) · min(1, 1/χ)

  χ = (N_un N_assoc ε / (N_assoc + 1)) e^{-β(U_new - U_old)}     (acceptance)

33 Association-Bias Trial Move 4. Comments
- This is incorrect!
    χ = (N_un N_assoc ε / (N_assoc + 1)) e^{-β(U_new - U_old)}
- Need to account for the full probability of positioning at r_new
  - this region has extra probability of being selected (it is in the vicinity of two molecules)
  - must look in the local environment of the position to see if it also lies in the neighborhood of other atoms
  - add a 1/(εV) for each such atom
- Algorithm requires one to identify, or keep track of, the number of associated/unassociated molecules

34 Using an Approximation Potential 1. Specification
- Evaluating the potential energy is the most time-consuming part of a simulation
- Some potentials are especially time-consuming, e.g.
  - three-body potentials
  - Ewald sum
- Idea: move the system through the Markov chain using an approximation to the real potential (cheaper to compute)
  - at intervals, accept or reject the entire subchain using the correct potential
[Figure: subchain of steps driven by the approximate potential, bracketed by evaluations of the true potential]

35 Approximation Potential 2. Analysis of Trial Probabilities
- What are π_ij and π_ji?
  [Diagram: super-step from state i through intermediate states 1, 2, 3, ..., n to state j]
- Given that each elementary Markov step obeys detailed balance for the approximate potential, one can show that the super-step i → j also obeys detailed balance (for the approximate potential):
    π_i^a π_ij^(n) = π_j^a π_ji^(n)
- Very hard to analyze without this result
  - would have to consider all paths from i to j to get the transition probability

36 Approximation Potential 3. Analysis of Detailed Balance
- Formulate the acceptance criterion to satisfy detailed balance for the real potential:
    π_i π_ij^(n) min(1, χ) = π_j π_ji^(n) min(1, 1/χ)
- Approximate-potential detailed balance π_i^a π_ij^(n) = π_j^a π_ji^(n) gives
    π_ij^(n) = (π_j^a / π_i^a) π_ji^(n)

38 Approximation Potential 3. Analysis of Detailed Balance (continued)
Substituting and solving for the acceptance ratio:

  χ = (π_j / π_j^a) / (π_i / π_i^a)

- Close to 1 if the approximate potential is a good description of the true potential (sketch below)
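
A sketch of the super-step, assuming hypothetical callables U_true, U_approx, and elementary_move; the inner loop samples exp(-βU_approx) by ordinary Metropolis, and the outer test applies χ as derived above:

```python
import math, random

def approx_potential_superstep(x, n_sub, U_true, U_approx, elementary_move, beta):
    """Subchain on the cheap potential, then one accept/reject on the true one."""
    x_i, x_j = x, x
    for _ in range(n_sub):                   # subchain obeys detailed balance
        trial = elementary_move(x_j)         # for the approximate potential
        dUa = U_approx(trial) - U_approx(x_j)
        if random.random() < math.exp(min(0.0, -beta * dUa)):
            x_j = trial
    # chi = (pi_j / pi_j^a) / (pi_i / pi_i^a)
    log_chi = (-beta * (U_true(x_j) - U_true(x_i))
               + beta * (U_approx(x_j) - U_approx(x_i)))
    return x_j if random.random() < math.exp(min(0.0, log_chi)) else x_i
```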

39 Summary
- Good Monte Carlo keeps the system moving among a wide variety of states
- At times sampling of a wide distribution is not done well
  - many states of comparable probability not easily reached
  - few states of high probability hard to find and then escape
- Biasing the underlying transition probabilities can remedy the problem
  - add bias to the underlying TPM
  - remove the bias in the acceptance step so the overall TPM is valid
- Examples
  - insertion/deletion bias in GCMC
  - force bias
  - association bias
  - using an approximate potential