Mix & Match Hamiltonian Monte Carlo
1 Mix & Match Hamiltonian Monte Carlo
Elena Akhmatskaya 1,2 and Tijana Radivojević 1
1 BCAM - Basque Center for Applied Mathematics, Bilbao, Spain
2 IKERBASQUE, Basque Foundation for Science, Bilbao, Spain
MCMSki V, Lenzerheide, Switzerland, January 5, 2016
2 Preview
We introduce an alternative to Hamiltonian Monte Carlo (HMC) for efficient sampling in statistical simulations. The new method, called Mix & Match Hamiltonian Monte Carlo (MMHMC), has been inspired by Generalized Shadow Hybrid Monte Carlo (GSHMC) by Akhmatskaya & Reich. MMHMC:
- is a generalized HMC that samples with modified Hamiltonians
- offers a computationally effective Metropolis test for the momentum update
- reduces potential negative effects of momentum flips
- relies on a method- and system-specific adaptive integration scheme and compatible modified Hamiltonians
- outperforms in sampling efficiency advanced sampling techniques for computational statistics, such as HMC and Riemann Manifold HMC
- is efficient for sampling in multidimensional spaces
3 Behind the scenes
GSHMC was introduced for sampling in molecular simulation:
- published: Akhmatskaya, Reich (2008), J. Comput. Phys., 227, 4934; Akhmatskaya, Bou-Rabee, Reich (2009), J. Comput. Phys., 228(6), 2256
- patented: GB patent (2009), US patent (2011) [Fujitsu; authors: Akhmatskaya, Reich]
- proved successful in simulations of complex molecular systems in biology and chemistry (6 publications)
- no implementation and testing in statistical computation until 2015
- never implemented in open source software due to patent restrictions
November 2015: Fujitsu issued a license giving permission (i) to use the patented method in open source software and (ii) to EA to implement and use the know-how.
Current status: GSHMC has been modified and adapted to statistical applications to give birth to MMHMC. Implemented in BCAM in-house software [Radivojević, Akhmatskaya, preprint]
4 MMHMC features
For the target density π(θ) of a position vector θ, momenta p conjugate to θ, and a mass matrix M (a preconditioner), construct a potential function U(θ) = −log π(θ) and a Hamiltonian
H(θ, p) = U(θ) + (1/2) pᵀM⁻¹p.
Sampling is performed with respect to a modified canonical density π̃(θ, p) ∝ exp(−H_h(θ, p)).
E.g., the modified Hamiltonian of order m = 4 for the Verlet method:
H_h(θ, p) = H(θ, p) + (h²/24) ( 2 pᵀM⁻¹U_θθ(θ)M⁻¹p − U_θ(θ)ᵀM⁻¹U_θ(θ) )
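As a concrete illustration of the slide above, the following sketch (a toy 1D standard Gaussian target with M = 1; function and variable names are my own, not the authors') integrates Hamiltonian dynamics with Verlet and checks that the 4th-order modified Hamiltonian is conserved far better than the true one.

```python
import numpy as np

def U(theta):      return 0.5 * theta**2   # -log pi(theta), standard Gaussian
def grad_U(theta): return theta            # U_theta
def hess_U(theta): return 1.0              # U_theta_theta

def H(theta, p):
    return U(theta) + 0.5 * p**2

def H_shadow(theta, p, h):
    # 4th-order modified Hamiltonian for Verlet (slide 4, with M = 1)
    return H(theta, p) + h**2 / 24.0 * (2.0 * p * hess_U(theta) * p
                                        - grad_U(theta)**2)

def verlet(theta, p, h, n_steps):
    """Velocity Verlet trajectory; returns the list of visited (theta, p)."""
    out = [(theta, p)]
    for _ in range(n_steps):
        p -= 0.5 * h * grad_U(theta)
        theta += h * p
        p -= 0.5 * h * grad_U(theta)
        out.append((theta, p))
    return out

h = 0.1
traj = verlet(1.0, 0.5, h, 1000)
th0, p0 = traj[0]
err_H  = max(abs(H(th, p) - H(th0, p0)) for th, p in traj)
err_Hh = max(abs(H_shadow(th, p, h) - H_shadow(th0, p0, h)) for th, p in traj)
# err_Hh sits orders of magnitude below err_H: the shadow Hamiltonian is
# conserved to O(h^4), the true one only to O(h^2)
```

This conservation gap is exactly what drives MMHMC's high acceptance rates.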
5 MMHMC algorithm
Partial Momentum Monte Carlo (PMMC): given (θ, p), draw noise u ~ N(0, M) and, with ϕ ∈ (0, 1], propose
p* = √(1−ϕ) p + √ϕ u,
accepted with probability P = min{1, exp(−ΔĤ_h)} (otherwise p is kept), where
ΔĤ_h = (h²(6b−1)/24) ( ϕA + 2√(ϕ(1−ϕ)) B ),
A = uᵀM⁻¹U_θθ(θ)M⁻¹u − pᵀM⁻¹U_θθ(θ)M⁻¹p,
B = uᵀM⁻¹U_θθ(θ)M⁻¹p,
and b is the integrator's parameter. The extended Hamiltonian
Ĥ_h(θ, p, u) = H_h(θ, p) + (1/2) uᵀM⁻¹u
defines the extended reference density π̂(θ, p, u) ∝ exp(−Ĥ_h(θ, p, u)). The updated momentum is then passed to the Hamiltonian dynamics step.
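A minimal sketch of the PMMC step for the 1D unit-mass case, assuming the standard partial-refresh proposal p* = √(1−ϕ)p + √ϕ u (the mixing coefficients are garbled in the transcription) and the ΔĤ_h expression above; `pmmc_update` and its argument names are illustrative, not the authors' implementation.

```python
import numpy as np

def pmmc_update(theta, p, h, phi, b, hess_U, rng):
    """One PMMC momentum update (1D, unit mass M = 1).

    Proposes p* = sqrt(1-phi)*p + sqrt(phi)*u with u ~ N(0, 1) and accepts
    it with probability min{1, exp(-dH)}, where dH is the change in the
    extended modified Hamiltonian for a two-stage integrator with
    parameter b (slide 5).  Returns (new momentum, accepted flag)."""
    u = rng.standard_normal()
    p_star = np.sqrt(1.0 - phi) * p + np.sqrt(phi) * u
    c = hess_U(theta)                      # M^{-1} U_tt M^{-1} with M = 1
    A = u * c * u - p * c * p
    B = u * c * p
    dH = h**2 * (6.0 * b - 1.0) / 24.0 * (phi * A
                                          + 2.0 * np.sqrt(phi * (1.0 - phi)) * B)
    if rng.random() < min(1.0, np.exp(-dH)):
        return p_star, True
    return p, False

rng = np.random.default_rng(0)
# With b = 1/6 the prefactor (6b - 1) vanishes, so dH = 0 and the
# proposal is always accepted.
p_new, accepted = pmmc_update(1.0, 0.5, 0.1, 0.5, 1.0 / 6.0, lambda t: 1.0, rng)
```

Note that the position-only β-term of the modified Hamiltonian cancels in this test because θ is unchanged, which is what makes the momentum-update test cheap.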
6 MMHMC algorithm (continued)
After PMMC and Hamiltonian dynamics produce a proposal (θ', p'), the Metropolis test sets
(θ_new, p_new) = (θ', p') accepted with probability α, and F(θ, p) otherwise,
α = min{1, exp(−ΔH_h)},
where the flip is F(θ, p) = (θ, −p) (optionally a reduced flip).
Re-weighting (w): for every n = 1, ..., N store
w_n = exp( H_h(θ_n, p_n) − H(θ_n, p_n) )
and estimate observables Ω_n, n = 1, 2, ..., N, along the sequence of states (θ_n, p_n) as
⟨Ω⟩ = Σ_{n=1}^N w_n Ω_n / Σ_{n=1}^N w_n
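The re-weighting rule above is self-normalized importance sampling from exp(−H_h) back to exp(−H); a short sketch (with a max-shift of the log-weights for numerical stability; names are my own):

```python
import numpy as np

def reweighted_mean(omega, H_true, H_shadow):
    """Importance weights w_n = exp(H_h - H) correct averages of an
    observable Omega collected while sampling exp(-H_h) instead of
    exp(-H).  The max log-weight is subtracted before exponentiating
    for numerical stability."""
    log_w = np.asarray(H_shadow, float) - np.asarray(H_true, float)
    w = np.exp(log_w - log_w.max())        # unnormalized, overflow-safe
    return float(np.sum(w * np.asarray(omega, float)) / np.sum(w))

# When H and H_h coincide the weights are uniform: plain average recovered.
plain = reweighted_mean([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
```

Because H_h stays within O(h⁴) of H, the weights remain close to uniform and the effective sample size loss from re-weighting is small.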
7 What to expect?
Pros: enhanced sampling
- High acceptance rates: H_h is conserved by symplectic integrators better than H
- Access to second-order information about the target distribution
- Extra parameter for performance enhancement
- Faster convergence to the target PDF
Cons: extra computational cost
- Computation of H_h for each proposal (higher-order H_h are more expensive)
- Extra Metropolis test for the momentum update
- Accurate numerical integrators required to use low-order H_h for systems with highly oscillatory H
Our strategy: find the numerical integrator that provides the best conservation of modified energy. Search within the family of 2-stage splitting integration schemes.
8 Modified Hamiltonians for splitting integrators
The 4th-order modified Hamiltonian for 2-stage splitting integrators is derived in terms of quantities available during simulation by applying the Baker-Campbell-Hausdorff (BCH) formula iteratively:
H_h(θ, p) = U(θ) + K(p) + h² ( α pᵀM⁻¹U̇_θ(θ) + β U_θ(θ)ᵀM⁻¹U_θ(θ) ),
where U̇_θ(θ) is the numerical time-derivative of U_θ(θ),
α = (6b − 1)/24,  β = (6b² − 6b + 1)/12,
and b is the parameter of the integrator.
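The coefficients can be coded directly. As a consistency check (my own, using the coefficient expressions as reconstructed above): b = 1/4 makes the two-stage scheme identical to two Verlet steps of size h/2, so α and β must reduce to slide 4's Verlet formula evaluated at step h/2, i.e. 2·(1/4)/24 = 1/48 and −(1/4)/24 = −1/96.

```python
def shadow_coeffs(b):
    """4th-order modified-Hamiltonian coefficients for the one-parameter
    two-stage splitting family (alpha multiplies p^T M^{-1} dU_theta/dt,
    beta multiplies U_theta^T M^{-1} U_theta)."""
    alpha = (6.0 * b - 1.0) / 24.0
    beta = (6.0 * b**2 - 6.0 * b + 1.0) / 12.0
    return alpha, beta

# b = 1/4 -> two Verlet half-steps: alpha = 1/48, beta = -1/96.
a, c = shadow_coeffs(0.25)
```

Note also that b = 1/6 zeroes α, which is why the momentum-update test on slide 5 (whose ΔĤ_h carries the factor 6b − 1) becomes trivial for that integrator.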
9 Adaptive integrators for optimal conservation of modified energy
Consider the one-parameter 2-stage splitting integrator of a Hamiltonian system with H(θ, p) = U(θ) + (1/2) pᵀM⁻¹p = A + B:
ψ_h = φ^B_{bh} ∘ φ^A_{h/2} ∘ φ^B_{(1−2b)h} ∘ φ^A_{h/2} ∘ φ^B_{bh}.
Our Adaptive Integration Approach (AIA): given a step size h and highest frequency f_max, find b(h, f_max) (a system-specific integrator) that minimizes the expectation of the modified energy error
Δ = H_h^[4](Ψ_{h,L}(θ, p)) − H_h^[4](θ, p),
i.e. 0 ≤ E(Δ) ≤ ρ(h, b) → minimal.
10 Adaptive integrators for optimal conservation of modified energy (continued)
ρ(h, b) = ( (1/2 + h²β)B_h + (1/2 + h²α)C_h ) (B_h + C_h) / A_h²
A_h = 1 − h²/2 + h⁴ b(1 − 2b)/4
B_h = h − h³(1 − 2b)/4
C_h = −h + h³ b(1 − b) − h⁵ b²(1 − 2b)/4
Then: b*(h, f_max) = arg min_b max_{0 < h̄ ≤ h̃} ρ(h̄, b), where h̃ (the reduced step size) is a function of f_max.
11-12 AIA in action
[Plots: optimal parameter b vs. step size h for AIA, BCSS and VV; median ESS per unit time (med ESS/t) vs. h for D = 1 and D = 1000, comparing AIA, VV, BCSS and HMC with VV.]
VV - velocity Verlet, b = 0.25. BCSS - a fixed-parameter integrator derived for MMHMC using ideas of Blanes et al. (2014).
Efficiency in sampling a multivariate Gaussian distribution.
S. Blanes, F. Casas, J. M. Sanz-Serna (2014), Numerical integrators for the Hybrid Monte Carlo method, SIAM J. Sci. Comput., 36(4), A1556-A1580
13 Benchmark Models
Banana-shaped distribution: D = 2
Bayesian Logistic Regression (UCI datasets, with D parameters and K observations): german (D = 25), sonar (D = 61), musk (D = 167), secom (D = 444)
Stochastic Volatility: p(x | y, θ), p(θ | y, x); simulated data, d = 2000, d = 5000, d = 10000
Lichman, M. (2013), UCI Machine Learning Repository, Irvine, CA: University of California, School of Information and Computer Science
14 Testing
Methods: MMHMC, HMC, RMHMC
Criteria and metrics:
- Space exploration: AR - acceptance rate
- Sampling efficiency: ESS* - time-normalized effective sample size; EF - efficiency factor (relative ESS* w.r.t. HMC)
- Convergence: R̂ - potential scale reduction factor
Results averaged over 10 runs; simulation parameters of each method tuned for its best performance.
M. Girolami, B. Calderhead (2011), Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods, Journal of the Royal Statistical Society: Series B, 73(2), 123-214
S. P. Brooks, A. Gelman (1998), General methods for monitoring convergence of iterative simulations, Journal of Computational and Graphical Statistics, 7, 434-455
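The convergence metric R̂ can be sketched as the basic potential scale reduction factor of Brooks & Gelman (1998); this is a generic textbook version (without the sampling-variability degrees-of-freedom correction), not the authors' code.

```python
import numpy as np

def r_hat(chains):
    """Basic potential scale reduction factor for an (m, n) array of
    m parallel chains of length n.  Values near 1 indicate convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return float(np.sqrt(var_plus / W))

rng = np.random.default_rng(1)
mixed = rng.standard_normal((4, 5000))        # four well-mixed chains
stuck = mixed.copy()
stuck[0] += 5.0                               # one chain stuck in another mode
```

Well-mixed chains give R̂ ≈ 1, while the shifted chain inflates the between-chain variance and pushes R̂ well above 1.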
15 Banana distribution
[Figure: 5 sampling paths of RWMH, HMC, MMHMC and RMHMC.]
16 Banana distribution
[Figure: 5000 samples from RWMH, HMC, MMHMC and RMHMC.]
17 Bayesian Logistic Regression (BLR)
[Plots: acceptance rate (AR) and efficiency factor (EF) of HMC and MMHMC, in terms of min, median and max ESS, for german (D = 25), sonar (D = 61), musk (D = 167) and secom (D = 444).]
18 Stochastic volatility (SV): d = 2000
[Plots: AR for θ and x, and EF of HMC, RMHMC and MMHMC for parameters β, σ, φ, in terms of min, median and max ESS.]
19 SV: d = 5000
[Plots: AR for θ and x, and EF of HMC and MMHMC for parameters β, σ, φ, in terms of min, median and max ESS.]
20 SV: d = 10000
[Plots: AR for θ and x, and EF of HMC and MMHMC for parameters β, σ, φ, in terms of min, median and max ESS.]
21 SV Convergence
[Plots: potential scale reduction factor R̂ for β, σ and φ vs. MC iterations, for d = 2000, 5000 and 10000, comparing HMC, RMHMC and MMHMC.]
22 Summary
MMHMC vs HMC: MMHMC demonstrates higher AR, bigger ESS* and faster convergence.
MMHMC vs RMHMC:
- SV model: MMHMC and RMHMC demonstrate comparable sampling performance, with a slight dominance of MMHMC.
- BLR model: RMHMC does not improve on HMC for the considered dimensions, whereas MMHMC outperforms HMC by a factor of up to 8 for D ≥ 25.
In contrast to RMHMC, MMHMC:
- does not require matrix inversion (computationally less expensive)
- relies on separable Hamiltonians, which allows for the use of new, more efficient numerical integrators
- is efficient for high-dimensional problems
23 Guggenheim, Bilbao
Modeling & Simulation in Life & Materials Sciences (MSLMS), BCAM
More informationRiemannian Stein Variational Gradient Descent for Bayesian Inference
Riemannian Stein Variational Gradient Descent for Bayesian Inference Chang Liu, Jun Zhu 1 Dept. of Comp. Sci. & Tech., TNList Lab; Center for Bio-Inspired Computing Research State Key Lab for Intell. Tech.
More informationPerformance Comparison of K-Means and Expectation Maximization with Gaussian Mixture Models for Clustering EE6540 Final Project
Performance Comparison of K-Means and Expectation Maximization with Gaussian Mixture Models for Clustering EE6540 Final Project Devin Cornell & Sushruth Sastry May 2015 1 Abstract In this article, we explore
More informationSequential Monte Carlo Methods
University of Pennsylvania Bradley Visitor Lectures October 23, 2017 Introduction Unfortunately, standard MCMC can be inaccurate, especially in medium and large-scale DSGE models: disentangling importance
More informationCombining stochastic and deterministic approaches within high efficiency molecular simulations
Cent. Eur. J. Math. 1-16 Author version Central European Journal of Mathematics Combining stochastic and deterministic approaches within high efficiency molecular simulations Research Bruno Escribano 1,
More informationExercises Tutorial at ICASSP 2016 Learning Nonlinear Dynamical Models Using Particle Filters
Exercises Tutorial at ICASSP 216 Learning Nonlinear Dynamical Models Using Particle Filters Andreas Svensson, Johan Dahlin and Thomas B. Schön March 18, 216 Good luck! 1 [Bootstrap particle filter for
More informationKernel adaptive Sequential Monte Carlo
Kernel adaptive Sequential Monte Carlo Ingmar Schuster (Paris Dauphine) Heiko Strathmann (University College London) Brooks Paige (Oxford) Dino Sejdinovic (Oxford) December 7, 2015 1 / 36 Section 1 Outline
More informationVariational inference
Simon Leglaive Télécom ParisTech, CNRS LTCI, Université Paris Saclay November 18, 2016, Télécom ParisTech, Paris, France. Outline Introduction Probabilistic model Problem Log-likelihood decomposition EM
More informationParsimonious Adaptive Rejection Sampling
Parsimonious Adaptive Rejection Sampling Luca Martino Image Processing Laboratory, Universitat de València (Spain). Abstract Monte Carlo (MC) methods have become very popular in signal processing during
More information