Today: Fundamentals of Monte Carlo

What is Monte Carlo? Named at Los Alamos in the 1940s after the casino. Any method which uses (pseudo)random numbers as an essential part of the algorithm. Stochastic, not deterministic! A method for doing high-dimensional integrals by sampling the integrand. Often a Markov chain, called Metropolis MC.

Simple example: Buffon's needle.

Monte Carlo determination of π: Consider a circle inscribed in a square, and consider one quadrant. By geometry:

  (area of 1/4 circle) / (area of square) = (π r²/4) / r² = π/4

Simple MC is like throwing darts at a unit-square target: using a RNG uniform on (0,1), pick a pair of coordinates (x, y). Count the number of points (darts) in the shaded quarter circle versus the total. (points in quarter circle) / (points in square) ≈ π/4.

  hits = 0
  do i = 1, N {
    x = random()
    y = random()
    distance2 = x² + y²
    if (distance2 ≤ 1) hits = hits + 1
  }
  pi = 4 * hits / N
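The dart-throwing loop above can be written as a minimal Python sketch (the function name and seed are mine, not from the slides):

```python
# Estimate pi by dart-throwing Monte Carlo: the fraction of uniform points
# in the unit square that land inside the quarter circle x^2 + y^2 <= 1
# approaches pi/4.
import random

def estimate_pi(n, seed=0):
    rng = random.Random(seed)   # fixed seed so the sketch is reproducible
    hits = 0
    for _ in range(n):
        x = rng.random()
        y = rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n

print(estimate_pi(100_000))
```

The statistical error of this estimate shrinks only as 1/√N, which is the theme of the next slide.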
MC is advantageous for high-dimensional integrals: the best general method.

Consider an integral in the unit D-dimensional hypercube:

  I = ∫ dx₁ ... dx_D f(x₁, ..., x_D)

By conventional deterministic methods: lay out a grid with L points in each direction, with spacing h = 1/L. The number of points is N = L^D = h^(−D) ∝ CPU time.

How does the error scale with CPU time or dimensionality? The error ε in the trapezoidal rule goes as ε = f″(x) h², since

  f(x) = f(x₀) + f′(x₀) h + (1/2) f″(x₀) h² + ...

so direct CPU time scales as ε^(−D/2) (ε ~ h² and CPU ~ h^(−D)). But by sampling we find CPU ~ ε^(−2) (ε ~ M^(−1/2) and CPU ~ M): to get another decimal place takes 100 times longer! So MC is advantageous for D > 4!

Improved Numerical Integration

Integration methods in D dimensions:
  Trapezoidal rule: ε ~ f⁽²⁾(x) h²
  Simpson's rule: ε ~ f⁽⁴⁾(x) h⁴
  Generally: ε ~ f⁽ⁿ⁾(x) h^ν with ν the order of the method.

CPU time scales with the number of sample points (grid size): CPU ~ h^(−D) (e.g., 1-D like L = 1/h points, 2-D like L²). Time to do the integral:

  T_int ~ ε^(−D/ν)

Stochastic Integration (Monte Carlo): ε ~ M^(−1/2) (inverse square root of the number of sampled points), so CPU time T ~ ε^(−2). In the limit of small ε, MC wins if D > 2ν!
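The D-independence of the Monte Carlo error can be sketched in a few lines of Python: average the integrand at M uniform points in the hypercube (the function names and the test integrand are mine, not from the slides):

```python
# Monte Carlo estimate of I = integral of f over the unit D-dimensional
# hypercube: the average of f at M uniformly random points.  The error
# scales as M^(-1/2) regardless of D, while grid methods cost h^(-D).
import random

def mc_integrate(f, dim, m, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / m

# Hypothetical example integrand: f(x) = sum of squared coordinates,
# whose exact integral over the unit hypercube is D/3.
est = mc_integrate(lambda x: sum(xi * xi for xi in x), 6, 20_000)
print(est)   # exact value in 6 dimensions is 6/3 = 2
```

Doubling the accuracy requires quadrupling M, independent of D, exactly the ε ~ M^(−1/2) scaling quoted above.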
[Figure: log(error) vs. log(CPU time) for grid integration in 2D, 4D, 6D, 8D compared with MC; MC wins as the dimension grows.]

Other reasons to do Monte Carlo: conceptually and practically simple, and it comes with built-in error bars.

"Many methods of integration have been tried, and will be tried in this world of sin and woe. No one pretends that Monte Carlo is perfect or all-wise. Indeed, it has been said that Monte Carlo is the worst method except all those other methods that have been tried from time to time." (after Churchill, 1947)

Probability Distributions

P(x)dx = probability of observing a value in (x, x+dx); P(x) is a probability distribution function (p.d.f.) with

  P(x) ≥ 0  and  ∫ dx P(x) = 1.

x can be either a continuous or a discrete variable.

Cumulative distribution: the probability of x < y,

  c(y) = ∫^y dx P(x),  with 0 ≤ c(y) ≤ 1.

Useful for sampling P(x).

Average or expectation of g(x): ⟨g⟩ = ∫ dx P(x) g(x).

Moments: I_n = ∫ dx P(x) xⁿ
  Zeroth moment: I₀ = 1
  Mean: ⟨x⟩ = I₁
  Variance: ⟨(x − ⟨x⟩)²⟩ = I₂ − (I₁)²
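The moment relations can be checked numerically; a minimal sketch for the exponential pdf p(x) = e^(−x) on [0, ∞) (the quadrature routine is mine, not from the slides):

```python
# Check the moment relations for p(x) = e^{-x}: I0 = 1 (normalization),
# mean <x> = I1 = 1, and variance = I2 - I1^2 = 1.
import math

def moment(n, upper=50.0, steps=200_000):
    # Trapezoidal quadrature of I_n = integral of x^n e^{-x} dx on [0, upper];
    # the tail beyond upper=50 is negligible.
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * x**n * math.exp(-x)
    return total * h

I0, I1, I2 = moment(0), moment(1), moment(2)
print(I0, I1, I2 - I1**2)   # all three are approximately 1
```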
Mappings of Random Variables

Let p_x(x)dx be a probability distribution, and let y = g(x) be a new variable; e.g., y = g(x) = −ln(x) with 0 < x ≤ 1, so 0 ≤ y < ∞.

What is the pdf of y? With p_y(y)dy = p_x(x)dx,

  p_y(y) = p_x(x) |dx/dy| = p_x(x) / |dg(x)/dx|.

(What happens when g is not monotonic? Then every x mapping to a given y must be summed over.)

Example: y = g(x) = −ln(x):

  p_y(y) dy = p_x(x) |dx/dg| dy = e^(−y) dy

Distributed exponentially, as for Poisson events; e.g., y/λ has the pdf λ e^(−λy).

What Is the Mapping Doing?

Generate a uniform random deviate:

  p(x)dx = dx for 0 ≤ x ≤ 1, and 0 otherwise;  ∫ p(x) dx = 1 (the pdf is normalized).

Let p(y) = f(y); then y(x) = F^(−1)(x) (the functional inverse), with dx/dy = f(y), or x = F(y), where

  F(y) = ∫₀^y dy′ f(y′).

Uniform deviate x in → transformed deviate y out, with p(y) = f(y).

This allows a random deviate y to be drawn from a known probability distribution p(y). The indefinite integral of p(y) must be known and invertible. A uniform deviate x is chosen from (0,1) such that its corresponding y on the definite-integral curve is the desired deviate, i.e. x = F(y).
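The mapping y = −ln(x) can be verified directly: the mean of the resulting samples and the fraction below y = 1 should match the exponential distribution e^(−y). A minimal sketch (seed and sample count are mine):

```python
# Inverse-transform sampling sketch: push uniform deviates x through
# y = -ln(x) and check the result is exponentially distributed, as the
# Jacobian formula p_y(y) = p_x(x) |dx/dy| predicts.
import math
import random

rng = random.Random(2)
ys = [-math.log(rng.random()) for _ in range(100_000)]

mean = sum(ys) / len(ys)                              # Exp(1) has mean 1
frac_below_1 = sum(y < 1.0 for y in ys) / len(ys)     # F(1) = 1 - 1/e
print(mean, frac_below_1)
```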
Interpreting the Mapping

Let p(y) = f(y); then y(x) = F^(−1)(x), dx/dy = f(y), and x = F(y), where F(y) = ∫₀^y dy′ f(y′). Uniform deviate x in → transformed deviate y out, with p(y) = f(y).

Since F(y) is the area under p(y) = f(y), y(x) = F^(−1)(x) prescribes:
  Choose x ∈ (0,1]; then find the value y that has that fraction x of the area to the left of y, i.e. x = F(y).
  Return that value of y.

Example: drawing from the exponential (Poisson waiting-time) distribution:

  y = −ln(x)/λ,  p(y) = λ e^(−λy),  x = F(y) = 1 − e^(−λy).

Note: x = 1 − e^(−λy), so x′ = 1 − x = e^(−λy) (which is still uniform on (0,1]). So, indeed, y(x′) = −ln(x′)/λ.

Example: Drawing from the Normal (Gaussian) Distribution

For several variables,

  p(y₁, y₂, ...) dy₁ dy₂ ... = p(x₁, x₂, ...) |∂(x₁, x₂, ...)/∂(y₁, y₂, ...)| dy₁ dy₂ ...

Box-Muller method: get random deviates with a normal distribution

  p(y) dy = (1/√(2π)) e^(−y²/2) dy.

Consider uniform deviates x₁ and x₂ on (0,1), and the two quantities

  y₁ = √(−2 ln x₁) cos(2π x₂)
  y₂ = √(−2 ln x₁) sin(2π x₂)

or, equivalently,

  x₁ = exp(−[y₁² + y₂²]/2),  x₂ = (1/2π) arctan(y₂/y₁).

The Jacobian factorizes:

  |∂(x₁, x₂)/∂(y₁, y₂)| = [(1/√(2π)) e^(−y₁²/2)] [(1/√(2π)) e^(−y₂²/2)],

so each y is independently normally distributed.

Better (polar form): pick v₁, v₂ uniform in (−1,1) and accept only R² = v₁² + v₂² < 1, so that x₁ = R² and 2πx₂ is the angle of (v₁, v₂). Advantage: no sine and cosine, since cos(2πx₂) = v₁/√(R²) and sin(2πx₂) = v₂/√(R²). And we get two normal deviates per calculation (one for now, one for next time).
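The polar (Marsaglia) variant described above fits in a few lines; a minimal sketch checking that the output has zero mean and unit variance (function name and seed are mine):

```python
# Polar variant of Box-Muller: reject points outside the unit disk, then
# reuse v1/sqrt(R2) and v2/sqrt(R2) in place of cos and sin, giving two
# independent standard normal deviates per accepted pair.
import math
import random

def gauss_pair(rng):
    while True:
        v1 = 2.0 * rng.random() - 1.0
        v2 = 2.0 * rng.random() - 1.0
        r2 = v1 * v1 + v2 * v2
        if 0.0 < r2 < 1.0:                       # accept ~ pi/4 of tries
            factor = math.sqrt(-2.0 * math.log(r2) / r2)
            return v1 * factor, v2 * factor

rng = random.Random(3)
samples = []
for _ in range(50_000):
    samples.extend(gauss_pair(rng))

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean**2
print(mean, var)   # approximately 0 and 1
```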
Reminder: Gauss Central Limit Theorem

Sample N values from p(x)dx, i.e. (x₁, x₂, x₃, ..., x_N). Estimate the mean from y = (1/N) Σ xᵢ. What is the pdf of this mean? Solve by Fourier transforms. If you add together two independent random variables, you multiply together their characteristic functions:

  c_x(k) = ⟨e^(ikx)⟩ = ∫ dx P(x) e^(ikx),  so  c_{x+y}(k) = c_x(k) c_y(k).

Then for y = (x₁ + ... + x_N)/N,

  c_y(k) = [c_x(k/N)]^N.

Taylor expand:

  ln[c_x(k)] = Σ_n κ_n (ik)^n / n!   (the κ_n are the cumulants)

Cumulants:
  Mean = κ₁ = ⟨x⟩ = x̄
  Variance = κ₂ = ⟨(x − x̄)²⟩ = σ²
  Skewness = κ₃/σ³ = ⟨((x − x̄)/σ)³⟩
  Kurtosis = κ₄/σ⁴ = ⟨((x − x̄)/σ)⁴⟩ − 3

What happens to the cumulants of the mean? κ_n → κ_n / N^(n−1). The n = 1 moment remains invariant; the rest get reduced by higher powers of N:

  lim_{N→∞} c_y(k) = exp[ik x̄ − k²σ²/(2N) − ik³κ₃/(6N²) + ...]

  P(y) = (N/2πσ²)^(1/2) exp[−N(y − x̄)²/(2σ²)]

Given enough averaging, almost anything becomes a Gaussian distribution.
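The κ₂ → σ²/N reduction is easy to see numerically: means of N uniform deviates (σ² = 1/12) should have variance (1/12)/N. A minimal sketch (function name, N = 12, and seed are mine):

```python
# Central-limit sketch: the mean of N uniform deviates has variance
# sigma^2 / N with sigma^2 = 1/12, and its distribution approaches a
# Gaussian as N grows.
import random

def sample_means(n_per_mean, n_means, seed=4):
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(n_per_mean)) / n_per_mean
            for _ in range(n_means)]

means = sample_means(12, 20_000)
avg = sum(means) / len(means)
var = sum((m - avg) ** 2 for m in means) / len(means)
print(avg, var)   # approximately 0.5 and (1/12)/12
```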
Approach to Normality: Gauss Central Limit Theorem

For any population distribution (with finite variance), the distribution of the mean will approach a Gaussian.

Conditions on the Central Limit Theorem

  I_n = ∫ dx P(x) xⁿ

We need the first three moments to exist:
  If I₀ is not defined, it is not a pdf.
  If I₁ does not exist, the mean is not mathematically well-posed.
  If I₂ does not exist, the variance is infinite.

It is important to know whether the variance is finite for Monte Carlo. Divergence could happen because of the tails of the distribution:

  I₂ = ∫ dx P(x) x²  requires  lim_{x→∞} x³ P(x) → 0,

or because of singular behavior of P at finite x:

  lim_{x→0} x P(x) → 0.
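A classic failure case is the Cauchy distribution, whose tails fall only as 1/x² so that x³P(x) does not vanish and I₂ diverges: its sample mean never sharpens. A minimal sketch comparing it with the uniform distribution (the helper function and parameters are mine, not from the slides):

```python
# Why finite variance matters: sample means of Cauchy deviates (drawn as
# tan(pi*(x - 1/2)) from uniform x; infinite variance) scatter wildly,
# while sample means of uniform deviates concentrate as 1/sqrt(N).
import math
import random

def mean_spread(draw, n_means=200, n_per_mean=1000, seed=5):
    rng = random.Random(seed)
    means = [sum(draw(rng) for _ in range(n_per_mean)) / n_per_mean
             for _ in range(n_means)]
    mu = sum(means) / len(means)
    return max(abs(m - mu) for m in means)   # worst scatter of the means

uniform_spread = mean_spread(lambda r: r.random())
cauchy_spread = mean_spread(lambda r: math.tan(math.pi * (r.random() - 0.5)))
print(uniform_spread, cauchy_spread)
```

The uniform means stay within a few times σ/√N of each other; the Cauchy means do not improve with N at all, which is fatal for a Monte Carlo error estimate.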
Multidimensional Generalization

Suppose r is an m-dimensional vector drawn from a multidimensional pdf p(r) dᵐr. The mean is defined as before. The variance becomes the covariance, a positive symmetric m × m matrix:

  V_ij = ⟨(x_i − ⟨x_i⟩)(x_j − ⟨x_j⟩)⟩

For sufficiently large N, the estimated mean y approaches the distribution

  P(y) dᵐy = (N/2π)^(m/2) det(V)^(−1/2) exp[−(N/2)(y_i − ⟨x_i⟩) V⁻¹_ij (y_j − ⟨x_j⟩)] dᵐy

(summation over repeated indices).

[Figure: 2D histogram of occurrences of means (x₁, x₂), showing an elliptical cloud.]

The off-diagonal components of V_ij are called the covariance. Data can be uncorrelated, or positively or negatively correlated, depending on the sign of V_ij. Like a moment-of-inertia tensor, there are 2 principal axes with associated variances; find the axes by diagonalization or singular value decomposition. Individual error bars on x₁ and x₂ can be misleading if the variables are correlated.
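A minimal sketch of the covariance-and-principal-axes idea: build correlated 2D samples, estimate V_ij, and diagonalize the symmetric 2×2 matrix in closed form (the construction 0.8a + 0.6b and all names are mine, not from the slides):

```python
# Covariance of correlated 2D data: estimate V_ij and find the principal
# axes by diagonalizing the symmetric 2x2 covariance matrix.
import math
import random

rng = random.Random(6)
pts = []
for _ in range(50_000):
    a, b = rng.gauss(0, 1), rng.gauss(0, 1)
    pts.append((a, 0.8 * a + 0.6 * b))   # positively correlated pair

mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
vxx = sum((p[0] - mx) ** 2 for p in pts) / len(pts)
vyy = sum((p[1] - my) ** 2 for p in pts) / len(pts)
vxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / len(pts)

# Eigenvalues of [[vxx, vxy], [vxy, vyy]]: variances along principal axes.
tr, det = vxx + vyy, vxx * vyy - vxy ** 2
disc = math.sqrt(tr * tr / 4.0 - det)
lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
print(vxy, lam1, lam2)   # true values: 0.8, 1.8, 0.2
```

The large off-diagonal vxy is exactly why quoting separate error bars on x₁ and x₂ would be misleading here: the principal-axis variances 1.8 and 0.2 tell the real story.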