Tensorized Adaptive Biasing Force method for molecular dynamics
Virginie Ehrlacher (1), Tony Lelièvre (1) & Pierre Monmarché (2)
(1) CERMICS, Ecole des Ponts ParisTech & MATHERIALS project, INRIA
(2) Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie
IPAM, November
Outline of the talk
- Introduction to molecular dynamics
- Free energy and standard ABF method
- Modified ABF method
- Tensor approximation
Introduction
The aim of molecular dynamics simulations is to understand the relationships between the macroscopic properties of a molecular system and its atomistic features. In particular, one would like to evaluate numerically macroscopic quantities from models at the microscopic scale.
Many applications in various fields: biology, physics, chemistry, materials science.
Example: compute the pressure of a liquid at a given density and temperature.
Various models: discrete state space (kinetic Monte Carlo, Markov State Models) or continuous state space (Langevin dynamics).
Main ingredient
Consider a system composed of N atoms, whose positions are denoted by a vector $x = (x^1,\dots,x^N) \in \mathbb{R}^{3N}$. The basic ingredient to describe the atomic system is a potential V, which associates to a configuration $x = (x^1,\dots,x^N) \in \mathbb{R}^{3N}$ an energy $V(x^1,\dots,x^N)$, and characterizes the interactions between the different atoms in the system.
Example of a biological system (figure). Courtesy of Chris Chipot, CNRS Nancy.
Introduction
Newton's equations of motion:
$$dX_t = M^{-1} P_t\,dt, \qquad dP_t = -\nabla V(X_t)\,dt,$$
where $X_t = (X_t^1,\dots,X_t^N)$ are the positions (respectively $P_t = (P_t^1,\dots,P_t^N)$ the momenta) of the N atoms composing the molecule, and M is a diagonal matrix containing the masses of the different atoms.
Introduction
Newton's equations of motion + thermostat: Langevin dynamics:
$$dX_t = M^{-1}P_t\,dt, \qquad dP_t = -\nabla V(X_t)\,dt - \gamma M^{-1}P_t\,dt + \sqrt{2\gamma\beta^{-1}}\,dW_t,$$
where $\gamma > 0$, $\beta = (k_B T)^{-1}$ is proportional to the inverse of the temperature, and $W_t$ is a 3N-dimensional Brownian motion.
In the following, we focus on the over-damped Langevin (or gradient) dynamics:
$$dX_t = -\nabla V(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t.$$
Time discretization (Euler–Maruyama scheme): for $\Delta t > 0$,
$$X_{n+1} = X_n - \nabla V(X_n)\,\Delta t + \sqrt{2\Delta t\,\beta^{-1}}\,G_n,$$
where $G_0,\dots,G_n,\dots$ are independent standard Gaussian random vectors.
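As an illustration (not part of the slides), a minimal NumPy sketch of the Euler–Maruyama discretization of the over-damped Langevin dynamics; the double-well potential used in the example is a generic choice, not one from the talk:

```python
import numpy as np

def euler_maruyama(grad_V, x0, beta, dt, n_steps, rng=None):
    """Integrate dX_t = -grad V(X_t) dt + sqrt(2/beta) dW_t with Euler-Maruyama."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps + 1,) + x.shape)
    traj[0] = x
    for n in range(n_steps):
        # X_{n+1} = X_n - grad V(X_n) dt + sqrt(2 dt / beta) G_n
        x = x - grad_V(x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(x.shape)
        traj[n + 1] = x
    return traj

# Illustration on a 1D double-well potential V(x) = (x^2 - 1)^2
traj = euler_maruyama(lambda x: 4.0 * x * (x**2 - 1.0), x0=[1.0],
                      beta=4.0, dt=1e-3, n_steps=5000)
```

At low temperature (large β) the trajectory stays near one of the two wells for long stretches, which anticipates the metastability discussed below.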
Ergodicity property
The over-damped Langevin dynamics is ergodic with respect to the Gibbs measure
$$\mu_{V,\beta}(dx) = Z_{V,\beta}^{-1}\exp(-\beta V(x))\,dx, \quad\text{where}\quad Z_{V,\beta} = \int \exp(-\beta V(x))\,dx.$$
This property is extremely important to compute macroscopic quantities, in particular thermodynamic quantities (averages with respect to $\mu_{V,\beta}$ of some observables): stress, heat capacity, ... Indeed,
$$\mathbb{E}_{\mu_{V,\beta}}(\varphi(X)) = \int_{\mathbb{R}^{3N}} \varphi(x)\,\mu_{V,\beta}(dx) = \lim_{T\to+\infty} \frac{1}{T}\int_0^T \varphi(X_t)\,dt.$$
Difficulty: V has several local minima. As a consequence, $X_t$ is a metastable process and $\mu_{V,\beta}$ is a multimodal measure.
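The ergodic average can be checked numerically in a case where the Gibbs average is known in closed form; a quick sketch, with an assumed quadratic potential $V(x) = x^2/2$ whose Gibbs measure is the Gaussian $\mathcal{N}(0, \beta^{-1})$:

```python
import numpy as np

rng = np.random.default_rng(42)
beta, dt, n_steps = 1.0, 1e-2, 200_000
x, acc = 0.0, 0.0
for _ in range(n_steps):
    # over-damped Langevin for V(x) = x^2 / 2; its Gibbs measure is N(0, 1/beta)
    x += -x * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
    acc += x * x
est = acc / n_steps
print(est)  # close to E_mu[x^2] = 1/beta = 1, up to discretization and sampling error
```

For a multimodal V the same estimator is still consistent, but its convergence becomes impractically slow: this is the metastability problem below.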
Potential energy landscapes (figure).
Metastability
In practice, the dynamics is metastable: the system stays a long time in a well of V before jumping to another well.
Timescale of the microscopic process: … sec. Timescale of the macroscopic process: from $10^{-6}$ to $10^{2}$ sec.
- Slow convergence of trajectorial averages
- Transitions between metastable states are rare events
Another example (Dellago & Geissler): dimer in solution
- solvent–solvent and solvent–monomer interactions: truncated Lennard-Jones (WCA) potential on $r = \|x^i - x^j\|$:
$$V_{\mathrm{WCA}}(r) = 4\epsilon\left(\frac{\sigma^{12}}{r^{12}} - \frac{\sigma^{6}}{r^{6}}\right) + \epsilon \;\text{ if } r \le 2^{1/6}\sigma, \qquad 0 \text{ otherwise}$$
(repulsive potential);
- monomer–monomer interaction: double-well potential on $r = \|x^1 - x^2\|$.
The distance between the two monomers evolves more slowly than the rest of the system. In practice, quantities of interest depend on a few variables only.
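A short NumPy sketch of these pair potentials; the WCA form is the standard truncated-and-shifted Lennard-Jones potential, while the double-well parameters (h, r0, w) are illustrative placeholders, not values from the talk:

```python
import numpy as np

def v_wca(r, eps=1.0, sigma=1.0):
    """WCA potential: Lennard-Jones truncated at its minimum r_c = 2^(1/6) sigma."""
    r = np.asarray(r, dtype=float)
    rc = 2.0 ** (1.0 / 6.0) * sigma
    sr6 = (sigma / r) ** 6
    v = 4.0 * eps * (sr6 ** 2 - sr6) + eps   # shifted so that v(rc) = 0: purely repulsive
    return np.where(r <= rc, v, 0.0)

def v_double_well(r, h=1.0, r0=1.0, w=0.25):
    """Double-well monomer-monomer potential, minima at r0 and r0 + 2w (parameters assumed)."""
    return h * (1.0 - ((r - r0 - w) / w) ** 2) ** 2
```

The two minima of the monomer–monomer potential correspond to the compact and the stretched states of the dimer; the interconversion between them is the slow variable.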
Reaction coordinates
Let us assume that we are given an appropriate collection of slow variables $\xi(x)$, where $\xi : \mathbb{R}^{3N}\to\mathbb{R}^d$ with $d \ll N$, such that $\xi(x)$ properly characterizes the coarse-grained degrees of freedom of the system. The vector ξ is called a vector of reaction coordinates or collective variables, and will be used to bias the dynamics (adaptive importance sampling technique).
These reaction coordinates can either be intuited by chemists or physicists, or be identified by machine-learning techniques (diffusion maps, nonlinear manifold learning, ...): Kevrekidis, Goldsmith, Arroyo, Clementi, ...
Free energy (based on works by T. Lelièvre, F. Legoll, M. Rousset and G. Stoltz)
In the general case, we have: the image of the measure $\mu_{V,\beta}$ by ξ is a measure on $\mathbb{R}^d$:
$$\xi * \mu_{V,\beta}(dz) = \exp(-\beta A(z))\,dz,$$
where the free energy A is defined as follows:
$$\forall z\in\mathbb{R}^d, \quad A(z) = -\beta^{-1}\ln\left(\int_{S(z)} e^{-\beta V(x)}\,\delta_{\xi(x)-z}(dx)\right),$$
with $S(z) = \{x,\ \xi(x) = z\} \subset \mathbb{R}^{3N}$.
Adaptive biasing techniques
The bottom line of adaptive methods is the following: for a well-chosen ξ, the potential $V - A\circ\xi$ is less rugged than V. Problem: A is unknown!
Idea: use a time-dependent potential of the form
$$V_t(x) = V(x) - A_t(\xi(x)),$$
where $A_t$ is an approximation at time t of A, given the configurations visited so far.
Hopes: build a dynamics which goes quickly to equilibrium; compute free energy profiles.
Wang–Landau, ABF, metadynamics: Darve, Pohorille, Hénin, Chipot, Laio, Parrinello, Wang, Landau, ...
Free energy biased potential
A 2D example of a free-energy-biased trajectory across an energetic barrier, with reaction coordinate $\xi(x,y) = x$. (Figure: x and y coordinates of the trajectory as functions of time, without and with the bias.)
The standard ABF method
For the sake of simplicity, let us assume here that d = 1. For the Adaptive Biasing Force (ABF) method, the idea is to use the formula
$$A'(z) = \int_{\Sigma(z)} f\,d\mu_{\Sigma(z)} = \mathbb{E}_{\mu}[f(X)\,|\,\xi(X) = z],$$
where f is an explicit function depending on V, β, ξ. The mean force $A'(z)$ is the average of f with respect to $\mu_{\Sigma(z)}$, where $\mu_{\Sigma(z)}$ is the probability measure $\mu_{V,\beta}$ conditioned to $\xi(x) = z$. Notice that actually, whatever $A_t$ is,
$$A'(z) = \frac{\int_{\Sigma(z)} f\,e^{-\beta(V - A_t\circ\xi)}\,\delta_{\xi(x)-z}}{\int_{\Sigma(z)} e^{-\beta(V - A_t\circ\xi)}\,\delta_{\xi(x)-z}}.$$
The standard ABF method
Thus, we would like to simulate
$$dX_t = -\nabla(V - A\circ\xi)(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t, \qquad A'(z) = \mathbb{E}_{\mu_{V,\beta}}(f(X)\,|\,\xi(X) = z),$$
but A is unknown... The ABF dynamics reads:
$$dX_t = -\nabla(V - A_t\circ\xi)(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t, \qquad A_t'(z) = \mathbb{E}(f(X_t)\,|\,\xi(X_t) = z).$$
[Lelièvre, Otto, Rousset, Stoltz, 2007], ... Under suitable assumptions, it holds that $A_t'$ converges (in some sense) to $A'$ as t goes to infinity.
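A minimal, illustrative sketch of standard ABF in the simplest setting: d = 1 with $\xi(x,y) = x$, for which the local mean force is $\partial_x V$, and the conditional expectation approximated by running averages in bins. The 2D toy potential is an assumption, not a system from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, dt, n_steps = 4.0, 1e-3, 50_000
n_bins, zmin, zmax = 40, -1.5, 1.5

def grad_V(q):
    # Toy 2D potential V(x, y) = (x^2 - 1)^2 + 2 y^2 (assumed for illustration)
    x, y = q
    return np.array([4.0 * x * (x**2 - 1.0), 4.0 * y])

# Running sums of the local mean force dV/dx in each bin of z = xi(x, y) = x
force_sum = np.zeros(n_bins)
counts = np.zeros(n_bins)

q = np.array([-1.0, 0.0])
for _ in range(n_steps):
    g = grad_V(q)
    b = min(n_bins - 1, max(0, int((q[0] - zmin) / (zmax - zmin) * n_bins)))
    force_sum[b] += g[0]
    counts[b] += 1.0
    bias = force_sum[b] / counts[b]        # running estimate of A'(z) in the current bin
    drift = -g + np.array([bias, 0.0])     # biased force: -grad(V - A_t o xi)
    q = q + drift * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(2)

mean_force = np.where(counts > 0, force_sum / np.maximum(counts, 1.0), 0.0)
```

As the bias converges, the effective free-energy barrier along ξ is flattened and the trajectory crosses between the two wells much more often than the unbiased dynamics would.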
Curse of dimensionality
Problem: when the number of relevant reaction coordinates d becomes large (say a few dozen), standard ABF methods suffer from the curse of dimensionality: $A(z)$ (or $\nabla A(z)$) has to be discretized on a grid whose size scales exponentially with respect to d:
$$\mathrm{DIM} = P^2, \quad \mathrm{DIM} = P^3, \quad \dots, \quad \mathrm{DIM} = P^d.$$
Tensor approximation
One method to circumvent this curse of dimensionality is to use tensor methods. Rough idea: approximate $A(z)$ in the following form:
$$A(z) \approx \sum_{k=1}^{n} r_k^1(z_1)\cdots r_k^d(z_d),$$
with n small (hopefully), so that $\mathrm{DIM} = nPd$.
Goal of the talk: propose a modified ABF algorithm, well-suited for the use of tensor approximations, in order to circumvent the curse of dimensionality.
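A back-of-the-envelope comparison of the two storage costs, and how a rank-n separable (canonical) approximation is evaluated from its 1D factors; all sizes and values are illustrative:

```python
import numpy as np

d, P, n = 10, 32, 5            # illustrative sizes: 10 coordinates, 32 points per axis, rank 5
full_grid_size = P ** d        # entries of a full tensor-product grid discretization
tensor_size = n * P * d        # entries of a rank-n canonical representation: 1600

# Evaluating A(z) ~ sum_k prod_i r_k^i(z_i) at a grid point from the 1D factors r[k, i, :]
rng = np.random.default_rng(0)
r = rng.standard_normal((n, d, P))   # discretized factors (illustrative values)
idx = rng.integers(0, P, size=d)     # multi-index of the evaluation point
value = sum(np.prod(r[k, np.arange(d), idx]) for k in range(n))
```

The full grid has about $10^{15}$ entries for these sizes, while the separable representation needs 1600 numbers and is evaluated in O(n d) operations.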
Extended ABF method [Zhao, Fu, Lelièvre, Shao, Chipot, Cai, 2017], ...
From now on, for the sake of simplicity, we will assume that the molecular system is confined in a finite domain $D_x = [-L, L]^{3N}$ with periodic boundary conditions. Besides, let us assume that $\xi : D_x \to D_z := \prod_{i=1}^d D_{z_i}$, where each $D_{z_i}$ is a closed bounded interval of $\mathbb{R}$.
The Extended ABF (EABF) method is a variant of the standard ABF method, which will be adapted in our context. It consists in introducing an auxiliary variable $z \in D_z$ and an auxiliary extended potential $W_\delta : D_x \times D_z \to \mathbb{R}$ defined by
$$W_\delta(x, z) := V(x) + \frac{1}{2\delta}\sum_{i=1}^d \left|\xi_i(x) - z_i\right|^2,$$
for a given small parameter $\delta > 0$.
Extended ABF method
The idea of the EABF method is to apply a standard ABF method to the extended system $(x, z) \in D_x \times D_z$ characterized by the interaction potential $W_\delta$, with reaction coordinates z. The free energy of this system is then
$$A_\delta(z) = -\frac{1}{\beta}\ln\left(\int_{D_x} e^{-\beta W_\delta(x,z)}\,dx\right).$$
Denoting $K_\delta(y, z) := \prod_{i=1}^d e^{-(y_i - z_i)^2/\delta}$, this implies that
$$\exp(-\beta A_\delta(z)) = \int_{D_x} e^{-\beta W_\delta(x,z)}\,dx = \int_{D_z} K_\delta(y, z)\exp(-\beta A(y))\,dy:$$
$\exp(-\beta A_\delta(z))$ is the convolution of $\exp(-\beta A(z))$ with a Gaussian kernel of width δ. $A_\delta$ is thus an approximation of A for δ > 0 small.
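The smoothing effect of the convolution can be checked numerically; the sketch below uses the slides' kernel normalization $K_\delta(y,z) = \prod_i e^{-(y_i - z_i)^2/\delta}$ and an assumed model profile for A, and verifies that the smoothed profile approaches A (up to an additive constant) as δ decreases:

```python
import numpy as np

beta = 1.0
z = np.linspace(-2.0, 2.0, 401)
A = (z**2 - 1.0) ** 2          # assumed model free-energy profile

def A_delta(delta):
    """exp(-beta A_delta) = convolution of exp(-beta A) with the Gaussian kernel K_delta."""
    dy = z[1] - z[0]
    K = np.exp(-((z[:, None] - z[None, :]) ** 2) / delta)   # K_delta(y, z)
    conv = (K * np.exp(-beta * A)[:, None]).sum(axis=0) * dy
    return -np.log(conv) / beta

def err(delta):
    # Distance to A up to an additive constant (the constant is removed by np.std)
    return np.std(A_delta(delta) - A)
```

Smaller δ gives a sharper kernel and a smaller distance to A, at the price of stiffer coupling terms in the extended dynamics.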
Aim of the modified EABF method
Our aim is to define for all times $t \ge 0$ a bias function $A_t$ which will be easy to compute with tensor approximations, and will converge in the long-time limit to an approximation of $A_\delta$; and to sample the process
$$dX_t = -\nabla_x W_\delta(X_t, Z_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t^1,$$
$$dZ_t = -\nabla_z W_\delta(X_t, Z_t)\,dt + \nabla_z A_t(Z_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t^2,$$
where $W_t^1$ and $W_t^2$ are two independent Brownian motions.
Equivalent reformulation of $A_\delta$
Note that $A_\delta$ (up to an additive constant) is the unique minimizer of
$$J_\delta(f) := \int_{D_x\times D_z} \left|\nabla_z W_\delta(x, z) - \nabla_z f(z)\right|^2 e^{-\beta W_\delta(x,z)}\,dx\,dz$$
over the set
$$\left\{f \in H^1(D_z),\ \int_{D_z} f = 0\right\} =: H^{1,0}(D_z).$$
General procedure of the (ideal) algorithm
Let $T > 0$ and $T_n := nT$ for all $n\in\mathbb{N}$.
- Set $n = 0$, $t_0 = 0$ and $A_0 = 0$;
- Suppose that $A_{T_n}$ has already been defined. Then, sample the process with $A_t = A_{T_n}$ for times $T_n \le t \le T_{n+1}$;
- At time $T_{n+1}$, define $A_{T_{n+1}} := g_{n+1}$, where $g_{n+1}$ is the unique minimizer of
$$\operatorname*{argmin}_{f \in H^{1,0}(D_z)} \mathcal{J}_{T_{n+1}}(f), \tag{1}$$
where $\mathcal{J}_{T_{n+1}}$ will be defined later;
- Set $n := n+1$.
Unbiased empirical measure of the process
At time $t \ge 0$, a sample $(X_s, Z_s)_{0\le s\le t}$ of the stochastic process defined above is available. The unbiased empirical measure $\nu_t$ is defined by
$$\int_{D_x\times D_z} f(x, z)\,d\nu_t(x, z) = \frac{\int_0^t f(X_s, Z_s)\,w_s\,ds}{\int_0^t w_s\,ds},$$
where $w_s := \exp(-\beta A_s(Z_s))$. This empirical measure is expected to converge to $\mu_{W_\delta,\beta} \propto e^{-\beta W_\delta(x,z)}$ as t goes to infinity. As a consequence, we would like to define
$$\mathcal{J}_t(f) := \int_{D_x\times D_z} \left|\nabla_z W_\delta(x, z) - \nabla_z f(z)\right|^2\,d\nu_t(x, z).$$
Problem: the measure $\nu_t$ is singular, so that the minimization problem (1) is not well-posed if we choose $\mathcal{J}_t$ as above. We have to introduce another regularization!
Cost functional
Let $\epsilon > 0$ and let $K_\epsilon(y, z) := \prod_{i=1}^d e^{-|y_i - z_i|^2/\epsilon}$. The actual cost functional will be
$$\mathcal{J}_t(f) := \int_{D_x\times D_z\times D_z} \left|\nabla_z W_\delta(x, y) - \nabla_z f(z)\right|^2\,d\nu_t(x, y)\,K_\epsilon(y, z)\,dz,$$
so that
$$\mathcal{J}_t(f) = \int_{D_z} \left|F_t(z) - \nabla_z f(z)\right|^2 \theta_t(z)\,dz,$$
where
$$\theta_t(z) := \left(\int_0^t w_s\,ds\right)^{-1}\int_0^t K_\epsilon(Z_s, z)\,w_s\,ds$$
and
$$F_t(z) := \left(\int_0^t K_\epsilon(Z_s, z)\,w_s\,ds\right)^{-1}\int_0^t \nabla_z W_\delta(X_s, Z_s)\,K_\epsilon(Z_s, z)\,w_s\,ds.$$
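Discrete-time analogues of $\theta_t$ and $F_t$ are kernel-weighted (Nadaraya–Watson-type) averages over the samples; a sketch in d = 1, where the function names and the synthetic data are assumptions for illustration:

```python
import numpy as np

def kernel_estimates(Z, dWdz, w, zgrid, eps):
    """Discrete-time analogues of theta_t and F_t.

    Z     : (n,) samples of the auxiliary variable Z_s
    dWdz  : (n,) samples of grad_z W_delta(X_s, Z_s)
    w     : (n,) unbiasing weights w_s = exp(-beta A_s(Z_s))
    """
    K = np.exp(-((Z[:, None] - zgrid[None, :]) ** 2) / eps)  # K_eps(Z_s, z)
    Kw = K * w[:, None]
    theta = Kw.sum(axis=0) / w.sum()                         # kernel-smoothed density
    F = (Kw * dWdz[:, None]).sum(axis=0) / Kw.sum(axis=0)    # kernel-weighted mean force
    return theta, F

# Synthetic check: if grad_z W_delta(X_s, Z_s) = 2 Z_s exactly, F should be close to 2 z
rng = np.random.default_rng(3)
Z = rng.uniform(-1.0, 1.0, size=5000)
zgrid = np.linspace(-0.5, 0.5, 11)
theta, F = kernel_estimates(Z, 2.0 * Z, np.ones_like(Z), zgrid, eps=0.01)
```

The kernel width ε plays the same role as the regularization in the cost functional: it makes the weighted least-squares problem for $\nabla_z f$ well-posed wherever $\theta_t > 0$.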
Long-time convergence of the ideal algorithm
Theorem (VE, Lelièvre, Monmarché, 2017). For all $n\in\mathbb{N}$, the minimization problem (1) is well-posed, in the sense that there exists a unique minimizer. Besides, the following convergence holds almost surely:
$$\left\|A_t - A_{\delta,\epsilon}\right\|^2_{L^2(D_z)} \xrightarrow[t\to+\infty]{} 0,$$
where $A_{\delta,\epsilon}$ is the unique minimizer in $H^{1,0}(D_z)$ of the functional
$$J_{\delta,\epsilon}(f) := \int_{D_x\times D_z\times D_z} \left|\nabla_z W_\delta(x, y) - \nabla_z f(z)\right|^2 e^{-\beta W_\delta(x,y)}\,K_\epsilon(y, z)\,dx\,dy\,dz.$$
The proof relies on an adaptation of the so-called ODE method of Benaïm and Bréhier.
Back to the minimization problem
Each iteration of the algorithm requires the resolution of a minimization problem of the following form: find $g\in H^{1,0}(D_z)$, the unique minimizer of
$$\min_{f\in H^{1,0}(D_z)} \mathcal{J}_t(f), \quad\text{where}\quad \mathcal{J}_t(f) := \int_{D_z} \left|F_t(z) - \nabla_z f(z)\right|^2 \theta_t(z)\,dz.$$
Recall that $H^{1,0}(D_z) = \left\{f\in H^1(D_z),\ \int_{D_z} f = 0\right\}$. When d is large, this minimization problem cannot be solved by standard (grid-based) methods.
General (standard) greedy algorithm
Let R be a Hilbert space of functions depending on the d variables $z_1,\dots,z_d$, and let
$$g = \operatorname*{argmin}_{f\in R} J(f). \tag{2}$$
A standard greedy tensor algorithm for the resolution of problem (2) reads as follows:
- Set $n = 0$ and $g_0 = 0$.
- Iteration n: compute $h_{n+1}\in\Sigma$ solution to $h_{n+1} \in \operatorname*{argmin}_{h\in\Sigma} J(g_n + h)$. Define $g_{n+1} := g_n + h_{n+1}$ and set $n := n+1$;
where Σ is a dictionary of pure tensor-product functions:
$$\Sigma = \left\{r^1(z_1)\cdots r^d(z_d),\ r^1\in R_1,\dots,r^d\in R_d\right\},$$
where $R_i$ is some Hilbert space of functions depending only on the variable $z_i$.
Remark: other, more advanced tensor formats can be chosen as dictionary (Tucker, Tensor Train, hierarchical tensor formats, ...).
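A minimal discrete analogue of this greedy algorithm for d = 2, where functions become matrices, pure tensor products become rank-one matrices $u v^\top$, and each greedy step is solved by alternating least squares; the target function is an illustrative choice:

```python
import numpy as np

def greedy_rank_one(T, n_terms, n_als=50, seed=0):
    """Greedy approximation of a matrix (a d = 2 tensor) by a sum of rank-one terms.

    Each greedy iteration minimizes ||T - g_n - u v^T||_F^2 over pure products
    h = u v^T, via alternating least squares on u and v.
    """
    rng = np.random.default_rng(seed)
    R = T.astype(float).copy()     # residual T - g_n
    g = np.zeros_like(R)
    for _ in range(n_terms):
        v = rng.standard_normal(T.shape[1])
        u = R @ v / (v @ v)
        for _ in range(n_als):
            v = R.T @ u / (u @ u)  # optimal v for fixed u
            u = R @ v / (v @ v)    # optimal u for fixed v
        g += np.outer(u, v)
        R -= np.outer(u, v)
    return g

# Smooth bivariate function: numerically low rank, so a few greedy terms suffice
x = np.linspace(0.0, 1.0, 50)
T = np.exp(-np.add.outer(x, x) ** 2)
g4 = greedy_rank_one(T, n_terms=4)
```

Each additional term decreases the residual, mirroring the monotone convergence of the abstract greedy iterates under the assumptions given on the next slide.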
Convergence of the greedy algorithm [Cancès, Lelièvre, VE, 2011], [Falcó, Nouy, 2012]
Under the following assumptions:
(A1) J is strongly convex on R;
(A2) J is differentiable and Lipschitz on bounded subsets of R;
(A3) Σ is weakly closed in R;
(A4) Span Σ is dense in R;
it holds that $(g_n)_{n\in\mathbb{N}}$ strongly converges to g in R.
Back to the modified ABF context
Recall that we are interested in finding $g\in R$, the unique minimizer of $\min_{f\in R} \mathcal{J}_t(f)$, where
$$\mathcal{J}_t(f) := \int_{D_z} \left|F_t(z) - \nabla_z f(z)\right|^2 \theta_t(z)\,dz \quad\text{and}\quad R = H^{1,0}(D_z) = \left\{f\in H^1(D_z),\ \int_{D_z} f = 0\right\}.$$
What should be a proper choice of the spaces $R_i$ to ensure the convergence of a greedy algorithm? Choices which do not work:
- $R_i := \left\{r^i\in H^1(D_{z_i}),\ \int_{D_{z_i}} r^i = 0\right\}$ for all i;
- $R_1 := \left\{r^1\in H^1(D_{z_1}),\ \int_{D_{z_1}} r^1 = 0\right\}$ and $R_i = H^1(D_{z_i})$ for $i\neq 1$.
Modified greedy algorithm
Actually, we have to introduce d dictionaries:
$$\Sigma_i := \left\{r^1(z_1)\cdots r^i(z_i)\cdots r^d(z_d),\ r^i\in H^{1,0}(D_{z_i}),\ r^j\in H^1(D_{z_j}),\ j\neq i\right\}.$$
A modified greedy algorithm is needed to obtain a converging sequence of approximations:
- Set $n = 0$ and $g_0 = 0$.
- Iteration n: set $g_{n,0} = g_n$. For $i = 1,\dots,d$, compute $h_{n+1,i}\in\Sigma_i$ solution to
$$h_{n+1,i} \in \operatorname*{argmin}_{h\in\Sigma_i} \mathcal{J}_t(g_{n,i-1} + h), \tag{3}$$
and set $g_{n,i} := g_{n,i-1} + h_{n+1,i}$.
- Set $g_{n+1} = g_{n,d}$ and $n := n+1$.
Convergence of the modified PGD algorithm
Theorem (VE, Lelièvre, Monmarché, 2017). Each iteration of the modified PGD algorithm is well-defined, in the sense that there always exists at least one minimizer to (3). Besides, the sequence $(g_n)_{n\in\mathbb{N}}$ strongly converges in $H^1(D_z)$ to g.
Conclusions and perspectives
- An idealized modified version of the ABF method, well-suited for tensor approximations, is proved to converge in the long-time limit to an approximation of the free energy of the molecular system.
- A provably convergent greedy algorithm is proposed to approximate the solutions of the intermediate problems by tensors when the number of reaction coordinates is significant.
Perspectives:
- Numerical implementation of the algorithm on real molecular systems;
- Appropriate choice of δ, ε?
- Theoretical convergence results on an intertwined ABF/greedy procedure?
Thank you for your attention!
Generalized greedy algorithms. François-Xavier Dupé & Sandrine Anthoine LIF & I2M Aix-Marseille Université - CNRS - Ecole Centrale Marseille, Marseille ANR Greta Séminaire Parisien des Mathématiques Appliquées
More informationStatistical Thermodynamics and Monte-Carlo Evgenii B. Rudnyi and Jan G. Korvink IMTEK Albert Ludwig University Freiburg, Germany
Statistical Thermodynamics and Monte-Carlo Evgenii B. Rudnyi and Jan G. Korvink IMTEK Albert Ludwig University Freiburg, Germany Preliminaries Learning Goals From Micro to Macro Statistical Mechanics (Statistical
More informationIntroduction. Statistical physics: microscopic foundation of thermodynamics degrees of freedom 2 3 state variables!
Introduction Thermodynamics: phenomenological description of equilibrium bulk properties of matter in terms of only a few state variables and thermodynamical laws. Statistical physics: microscopic foundation
More informationmacroscopic view (phenomenological) microscopic view (atomistic) computing reaction rate rate of reactions experiments thermodynamics
Rate Theory (overview) macroscopic view (phenomenological) rate of reactions experiments thermodynamics Van t Hoff & Arrhenius equation microscopic view (atomistic) statistical mechanics transition state
More informationCross-diffusion models in Ecology
Cross-diffusion models in Ecology Gonzalo Galiano Dpt. of Mathematics -University of Oviedo (University of Oviedo) Review on cross-diffusion 1 / 42 Outline 1 Introduction: The SKT and BT models The BT
More informationThe parallel replica method for Markov chains
The parallel replica method for Markov chains David Aristoff (joint work with T Lelièvre and G Simpson) Colorado State University March 2015 D Aristoff (Colorado State University) March 2015 1 / 29 Introduction
More informationIntroduction to Machine Learning (67577) Lecture 7
Introduction to Machine Learning (67577) Lecture 7 Shai Shalev-Shwartz School of CS and Engineering, The Hebrew University of Jerusalem Solving Convex Problems using SGD and RLM Shai Shalev-Shwartz (Hebrew
More informationA mathematical introduction to coarse-grained stochastic dynamics
A mathematical introduction to coarse-grained stochastic dynamics Gabriel STOLTZ gabriel.stoltz@enpc.fr (CERMICS, Ecole des Ponts & MATHERIALS team, INRIA Paris) Work also supported by ANR Funding ANR-14-CE23-0012
More informationSolution of Stochastic Nonlinear PDEs Using Wiener-Hermite Expansion of High Orders
Solution of Stochastic Nonlinear PDEs Using Wiener-Hermite Expansion of High Orders Dr. Mohamed El-Beltagy 1,2 Joint Wor with Late Prof. Magdy El-Tawil 2 1 Effat University, Engineering College, Electrical
More informationMetastability for the Ginzburg Landau equation with space time white noise
Barbara Gentz gentz@math.uni-bielefeld.de http://www.math.uni-bielefeld.de/ gentz Metastability for the Ginzburg Landau equation with space time white noise Barbara Gentz University of Bielefeld, Germany
More informationComputer simulations as concrete models for student reasoning
Computer simulations as concrete models for student reasoning Jan Tobochnik Department of Physics Kalamazoo College Kalamazoo MI 49006 In many thermal physics courses, students become preoccupied with
More informationOn oscillating systems of interacting neurons
Susanne Ditlevsen Eva Löcherbach Nice, September 2016 Oscillations Most biological systems are characterized by an intrinsic periodicity : 24h for the day-night rhythm of most animals, etc... In other
More informationDynamical systems with Gaussian and Levy noise: analytical and stochastic approaches
Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches Noise is often considered as some disturbing component of the system. In particular physical situations, noise becomes
More informationLecture 10: Finite Differences for ODEs & Nonlinear Equations
Lecture 10: Finite Differences for ODEs & Nonlinear Equations J.K. Ryan@tudelft.nl WI3097TU Delft Institute of Applied Mathematics Delft University of Technology 21 November 2012 () Finite Differences
More informationMachine Learning. Lecture 04: Logistic and Softmax Regression. Nevin L. Zhang
Machine Learning Lecture 04: Logistic and Softmax Regression Nevin L. Zhang lzhang@cse.ust.hk Department of Computer Science and Engineering The Hong Kong University of Science and Technology This set
More informationBrownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion
Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of
More informationRiemann Manifold Methods in Bayesian Statistics
Ricardo Ehlers ehlers@icmc.usp.br Applied Maths and Stats University of São Paulo, Brazil Working Group in Statistical Learning University College Dublin September 2015 Bayesian inference is based on Bayes
More informationNeural Networks and Deep Learning
Neural Networks and Deep Learning Professor Ameet Talwalkar November 12, 2015 Professor Ameet Talwalkar Neural Networks and Deep Learning November 12, 2015 1 / 16 Outline 1 Review of last lecture AdaBoost
More informationFree energy computations by minimization of. Kullback-Leibler divergence: an efficient. adaptive biasing potential method for sparse.
Free energy computations by minimization of Kullback-Leibler divergence: an efficient adaptive biasing potential method for sparse representations I. Bilionis Center for Applied Mathematics, Cornell University,
More informationApproximate Dynamic Programming
Approximate Dynamic Programming A. LAZARIC (SequeL Team @INRIA-Lille) ENS Cachan - Master 2 MVA SequeL INRIA Lille MVA-RL Course Approximate Dynamic Programming (a.k.a. Batch Reinforcement Learning) A.
More informationStochastic Subgradient Method
Stochastic Subgradient Method Lingjie Weng, Yutian Chen Bren School of Information and Computer Science UC Irvine Subgradient Recall basic inequality for convex differentiable f : f y f x + f x T (y x)
More informationA picture of the energy landscape of! deep neural networks
A picture of the energy landscape of! deep neural networks Pratik Chaudhari December 15, 2017 UCLA VISION LAB 1 Dy (x; w) = (w p (w p 1 (... (w 1 x))...)) w = argmin w (x,y ) 2 D kx 1 {y =i } log Dy i
More informationSUMMER SCHOOL - KINETIC EQUATIONS LECTURE II: CONFINED CLASSICAL TRANSPORT Shanghai, 2011.
SUMMER SCHOOL - KINETIC EQUATIONS LECTURE II: CONFINED CLASSICAL TRANSPORT Shanghai, 2011. C. Ringhofer ringhofer@asu.edu, math.la.asu.edu/ chris OVERVIEW Quantum and classical description of many particle
More informationModeling of Micro-Fluidics by a Dissipative Particle Dynamics Method. Justyna Czerwinska
Modeling of Micro-Fluidics by a Dissipative Particle Dynamics Method Justyna Czerwinska Scales and Physical Models years Time hours Engineering Design Limit Process Design minutes Continious Mechanics
More informationInterest Rate Models:
1/17 Interest Rate Models: from Parametric Statistics to Infinite Dimensional Stochastic Analysis René Carmona Bendheim Center for Finance ORFE & PACM, Princeton University email: rcarmna@princeton.edu
More informationInformation theoretic perspectives on learning algorithms
Information theoretic perspectives on learning algorithms Varun Jog University of Wisconsin - Madison Departments of ECE and Mathematics Shannon Channel Hangout! May 8, 2018 Jointly with Adrian Tovar-Lopez
More informationBridging the Gap between Center and Tail for Multiscale Processes
Bridging the Gap between Center and Tail for Multiscale Processes Matthew R. Morse Department of Mathematics and Statistics Boston University BU-Keio 2016, August 16 Matthew R. Morse (BU) Moderate Deviations
More informationMathematical Methods - Lecture 9
Mathematical Methods - Lecture 9 Yuliya Tarabalka Inria Sophia-Antipolis Méditerranée, Titane team, http://www-sop.inria.fr/members/yuliya.tarabalka/ Tel.: +33 (0)4 92 38 77 09 email: yuliya.tarabalka@inria.fr
More informationMollifying Networks. ICLR,2017 Presenter: Arshdeep Sekhon & Be
Mollifying Networks Caglar Gulcehre 1 Marcin Moczulski 2 Francesco Visin 3 Yoshua Bengio 1 1 University of Montreal, 2 University of Oxford, 3 Politecnico di Milano ICLR,2017 Presenter: Arshdeep Sekhon
More informationOptimization and Gradient Descent
Optimization and Gradient Descent INFO-4604, Applied Machine Learning University of Colorado Boulder September 12, 2017 Prof. Michael Paul Prediction Functions Remember: a prediction function is the function
More informationMathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )
Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical
More informationSplitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches
Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches Patrick L. Combettes joint work with J.-C. Pesquet) Laboratoire Jacques-Louis Lions Faculté de Mathématiques
More informationNanoscale simulation lectures Statistical Mechanics
Nanoscale simulation lectures 2008 Lectures: Thursdays 4 to 6 PM Course contents: - Thermodynamics and statistical mechanics - Structure and scattering - Mean-field approaches - Inhomogeneous systems -
More informationManifold Monte Carlo Methods
Manifold Monte Carlo Methods Mark Girolami Department of Statistical Science University College London Joint work with Ben Calderhead Research Section Ordinary Meeting The Royal Statistical Society October
More informationMonte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan
Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis
More information