MONTE CARLO METHOD
Reference 1: D. Frenkel and B. Smit, Understanding Molecular Simulation, 2nd ed., Academic Press, 2002.
Reference 2: D. P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, 3rd ed., Cambridge University Press, 2009.
Reference 3: A. Satoh, Introduction to Practice of Molecular Simulation, Elsevier, 2011.
MONTE CARLO METHODS:
- were invented in the context of the development of the atomic bomb in the 1940s
- are a class of computational algorithms
- can be applied to a vast range of problems
- are not merely a statistical tool
- rely on repeated random sampling
- generally provide approximate solutions
- are used in cases where analytical or numerical solutions do not exist or are too difficult to implement
- can also be used by the lazy scientist even when an analytical or numerical solution could be implemented
Monte Carlo methods generally follow these steps:
1. Determine the statistical properties of the possible inputs
2. Generate many sets of possible inputs that follow these properties
3. Perform a deterministic calculation with each set
4. Analyze the results statistically
The error of the results typically decreases as 1/sqrt(N).
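The four steps above can be sketched in a minimal example: the inputs are uniform random draws on [0, 1], the deterministic calculation is a plain average, and the statistical analysis compares the estimate against the known mean 0.5. The function name and seeds are illustrative choices, not from the source.

```python
import random

def mc_estimate(n_samples, seed=0):
    """Estimate E[X] for X ~ Uniform(0, 1) by averaging n_samples draws."""
    rng = random.Random(seed)
    total = sum(rng.random() for _ in range(n_samples))
    return total / n_samples

# The statistical error shrinks roughly as 1/sqrt(N): quadrupling N
# should about halve the typical deviation from the true mean 0.5.
for n in (100, 10_000, 1_000_000):
    est = mc_estimate(n)
    print(f"N = {n:>9}: estimate = {est:.5f}, error = {abs(est - 0.5):.5f}")
```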
Applications:
1. Numerical integration
Most problems can be solved by integration. Monte Carlo integration is the most common application of Monte Carlo methods.
Basic idea: do not use a fixed grid, but random points, because:
1. Curse of dimensionality: a fixed grid with N points per dimension in D dimensions requires N^D points
2. The step size must be chosen first
Example 1: Evaluation of definite integrals
A straightforward Monte Carlo solution to this problem is the hit-or-miss (or acceptance-rejection) method. Draw a box extending from a to b and from 0 to y0, where y0 > f(x) throughout this interval. Using random numbers drawn from a uniform distribution, drop N points randomly into the box and count the number, N0, which fall below f(x). An estimate of the integral is then given by the fraction of points which fall below the curve times the area of the box, i.e.

    Integral of f(x) from a to b  ~=  (N0 / N) * y0 * (b - a)

This estimate becomes increasingly precise as N -> infinity and will eventually converge to the correct answer.
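The hit-or-miss estimator described above can be written in a few lines. The test function x^2 on [0, 1] (exact value 1/3) is an illustrative choice, not from the source; the caller must supply a bound y0 that dominates f on the interval.

```python
import random

def hit_or_miss(f, a, b, y0, n, seed=1):
    """Hit-or-miss estimate of the integral of f on [a, b].

    y0 must satisfy y0 >= f(x) on [a, b]; the box [a, b] x [0, y0]
    is sampled uniformly, and the fraction of points under the curve
    is multiplied by the area of the box.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(a, b)
        y = rng.uniform(0.0, y0)
        if y < f(x):
            hits += 1
    return (hits / n) * (b - a) * y0

# Example: integral of x^2 on [0, 1]; the exact value is 1/3.
print(hit_or_miss(lambda x: x * x, 0.0, 1.0, 1.0, 100_000))
```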
Example 2: How to calculate pi?
1. Draw N points (x, y) uniformly at random in the unit square
2. Count the number C of points for which x^2 + y^2 < 1
3. The ratio C/N converges toward pi/4, with an error decreasing as N^(-1/2)
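The three steps above translate directly into code; the function name and seed are illustrative choices.

```python
import random

def estimate_pi(n, seed=2):
    """Estimate pi from the fraction of uniform points in the unit square
    that fall inside the quarter-disc x^2 + y^2 < 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            inside += 1
    return 4.0 * inside / n

print(estimate_pi(200_000))
```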
2. Optimization problems
Numerical solutions to optimization problems run the risk of getting stuck in local minima. A Monte Carlo approach can alleviate this problem by permitting random escapes from a local minimum in search of another, hopefully better, minimum.
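A minimal sketch of this idea, using a fixed-temperature Metropolis-style random walk (a simplified cousin of simulated annealing): downhill moves are always accepted, while uphill moves are accepted with probability exp(-dE/T), which lets the walker climb out of a local minimum. The double-well test function, step size, and temperature are illustrative assumptions, not from the source.

```python
import math
import random

def mc_minimize(f, x0, steps=50_000, step_size=0.5, temp=1.0, seed=3):
    """Stochastic minimization: accept downhill moves always, uphill moves
    with probability exp(-(f_new - f_old) / temp), so the walker can escape
    local minima; track the best point ever visited."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        f_new = f(x_new)
        if f_new <= fx or rng.random() < math.exp(-(f_new - fx) / temp):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# A tilted double well: shallow local minimum near x = -1, global minimum
# near x = +1. A pure downhill search started at x = -1 would get stuck;
# the stochastic walker usually escapes over the barrier at x = 0.
double_well = lambda x: (x * x - 1.0) ** 2 - 0.3 * x
x_min, f_min = mc_minimize(double_well, x0=-1.0)
print(x_min, f_min)
```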
Homework: Calculate the following integral using the MC method:

    Integral of x^3 from 0 to 2
Grand Canonical Monte Carlo Method
Markov Chain
In 1907, A. A. Markov began the study of an important new type of chance process. In this process, the outcome of a given experiment can affect the outcome of the next experiment. This type of process is called a Markov chain. Modern probability theory studies chance processes for which knowledge of previous outcomes influences predictions for future experiments. In principle, when we observe a sequence of chance experiments, all of the past outcomes could influence our predictions for the next experiment. For example, this should be the case in predicting a student's grades on a sequence of exams in a course.
Markov chains are mathematical systems that hop from one "state" (a situation or set of values) to another. For example, if you made a Markov chain model of a baby's behavior, you might include "playing," "eating," "sleeping," and "crying" as states, which together with other behaviors could form a "state space": a list of all possible states. In addition, on top of the state space, a Markov chain tells you the probability of hopping, or "transitioning," from one state to any other state, e.g., the chance that a baby currently playing will fall asleep in the next five minutes without crying first.
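The baby example above can be sketched as a small simulation. The transition probabilities below are illustrative numbers invented for this sketch, not from the source; the key point is that the next state is drawn using only the current state (the Markov property).

```python
import random

# Transition probabilities for the baby example: each row gives the
# chance of hopping from the current state to each possible next state.
# The numbers are illustrative assumptions; each row sums to 1.
STATES = ["playing", "eating", "sleeping", "crying"]
P = {
    "playing":  {"playing": 0.5, "eating": 0.2, "sleeping": 0.2, "crying": 0.1},
    "eating":   {"playing": 0.3, "eating": 0.3, "sleeping": 0.3, "crying": 0.1},
    "sleeping": {"playing": 0.2, "eating": 0.2, "sleeping": 0.5, "crying": 0.1},
    "crying":   {"playing": 0.2, "eating": 0.3, "sleeping": 0.2, "crying": 0.3},
}

def simulate(start, n_steps, seed=4):
    """Hop through the state space; the next state depends only on the
    current one, not on the earlier history."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(n_steps):
        row = P[state]
        state = rng.choices(list(row), weights=list(row.values()))[0]
        path.append(state)
    return path

path = simulate("playing", 10)
print(path)
```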
Monte Carlo Method
In the MD method, the motion of molecules (particles) is simulated according to the equations of motion, and therefore it is applicable to both thermodynamic equilibrium and nonequilibrium phenomena. In contrast, the MC method generates a series of microscopic states under a certain stochastic law, irrespective of the equations of motion of the particles. Since the MC method does not use the equations of motion, it cannot include the concept of explicit time, and thus is a simulation technique only for phenomena in thermodynamic equilibrium. Hence, the MC method is unsuitable for dealing with the dynamic, time-dependent properties of a system.
Reference: A. Satoh, Introduction to Practice of Molecular Simulation, Elsevier, 2011.
The Ensembles:
- The microcanonical ensemble: the particle number N, the total energy E, and the volume V are kept constant in each cell. This is a very simple ensemble because energy cannot flow from one cell to another.
- The isothermal-isobaric ensemble: N, the temperature T, and the pressure p are kept constant.
- The grand canonical ensemble: V, T, and the chemical potential are kept constant. The grand canonical ensemble is a fascinating one because the number of particles is allowed to fluctuate.
- The canonical ensemble: N, V, and T are kept constant.
Same algorithm with another reference:
Ref: D. Frenkel and B. Smit, Understanding Molecular Simulation, 2nd ed., Academic Press, 2002.
Accept or Reject?
Canonical Ensemble
GCMC Algorithm
The main procedure for the MC simulation of a nonspherical particle system is as follows:
1. Specify the initial position and direction of all particles.
2. Regard this state as microscopic state i and calculate the interaction energy Ui.
3. Choose an arbitrary particle, in order or randomly, and call this particle α.
4. Move particle α translationally using random numbers and calculate the interaction energy Uj for this new configuration.
5. If Uj ≤ Ui, adopt this new microscopic state and go to step 7.
6. If Uj > Ui, calculate ρj/ρi in Eq. (1) and take a random number R1 from a uniform random number sequence distributed from zero to unity.
6.1. If R1 ≤ ρj/ρi, adopt microscopic state j and go to step 7.
6.2. If R1 > ρj/ρi, reject this microscopic state, regard the previous state i as the new microscopic state j, and go to step 7.
7. Change the direction of particle α using random numbers and calculate the interaction energy Uk for this new state.
8. If Uk ≤ Uj, adopt this new microscopic state and repeat from step 2.
9. If Uk > Uj, calculate ρk/ρj in Eq. (1) and take a random number R2 from the uniform random number sequence.
9.1. If R2 ≤ ρk/ρj, adopt this new microscopic state k and repeat from step 2.
9.2. If R2 > ρk/ρj, reject this new state, regard the previous state j as the new microscopic state k, and repeat from step 2.
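The acceptance test in steps 5-6 (and again in 8-9) can be sketched compactly, assuming the canonical weights ρ ∝ exp(-U/kBT), so that the ratio ρj/ρi reduces to exp(-β(Uj - Ui)). The one-particle harmonic system below is a hypothetical toy used only to exercise the rule, not the nonspherical-particle system of the algorithm.

```python
import math
import random

def metropolis_accept(u_old, u_new, beta, rng):
    """Metropolis acceptance test: for canonical weights rho ~ exp(-beta*U),
    rho_new / rho_old = exp(-beta * (u_new - u_old))."""
    if u_new <= u_old:          # downhill move: always adopt (steps 5 and 8)
        return True
    # Uphill move: adopt with probability rho_new/rho_old (steps 6 and 9)
    return rng.random() < math.exp(-beta * (u_new - u_old))

# Toy illustration: one particle with hypothetical energy U(x) = x^2.
# Repeated trial displacements sample the Boltzmann distribution, for
# which the exact mean of x^2 is 1/(2*beta) = 0.5 at beta = 1.
rng = random.Random(5)
beta, x = 1.0, 0.0
samples = []
for _ in range(20_000):
    x_trial = x + rng.uniform(-0.5, 0.5)
    if metropolis_accept(x * x, x_trial * x_trial, beta, rng):
        x = x_trial
    samples.append(x)
mean_x2 = sum(s * s for s in samples) / len(samples)
print(mean_x2)
```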
[Figure: typical overlap regime of the particles; overlap in the general situation]
Although the translational and rotational changes are treated separately in the above algorithm, a simultaneous procedure is also possible, in which the position and direction of an arbitrary particle are changed at the same time and the new microscopic state is then accepted or rejected. However, for a strongly interacting system, the separate treatment may prove more effective in many cases.
Appendix G, Frenkel et al.
Calculation of the chemical potential (Sadus): The most common approach for calculating the chemical potential is the Widom test particle method (Widom, 1963). The Widom method involves inserting a ghost particle i randomly into the ensemble and calculating the energy of its interaction (Ei,test) with the particles of the ensemble. For a canonical ensemble, the residual chemical potential (i.e., the chemical potential minus the contribution from the ideal gas) is obtained from the following ensemble average:

    mu_res = -kB T ln < exp(-Ei,test / kB T) >NVT
Strictly, the Widom equation above is only valid for the canonical ensemble. In the NPT ensemble, density variations mean that the following should be used instead:

    mu_res = -kB T ln ( < V exp(-Ei,test / kB T) > / < V > )
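A minimal sketch of the canonical Widom estimator, assuming the user supplies an `insert_energy(config, rng)` function (a hypothetical helper invented for this sketch) that returns the interaction energy of one random ghost insertion into a stored configuration. The ideal-gas check at the end relies on the fact that zero insertion energy must give zero residual chemical potential.

```python
import math
import random

def widom_residual_mu(configurations, insert_energy, beta, rng):
    """Widom test-particle estimate of the residual chemical potential:
    mu_res = -(1/beta) * ln < exp(-beta * E_test) >, where the average
    runs over random ghost insertions into stored configurations."""
    boltz = [math.exp(-beta * insert_energy(c, rng)) for c in configurations]
    return -math.log(sum(boltz) / len(boltz)) / beta

# Sanity check: in an ideal gas every insertion costs zero energy,
# so the residual chemical potential must vanish.
rng = random.Random(6)
mu = widom_residual_mu([None] * 1000, lambda c, r: 0.0, beta=1.0, rng=rng)
print(mu)
```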
Paper Example:
Framework structures of zeolites