Use of probability gradients in hybrid MCMC and a new convergence test

Use of probability gradients in hybrid MCMC and a new convergence test Ken Hanson Methods for Advanced Scientific Simulations Group This presentation available under http://www.lanl.gov/home/kmh/ June 27, 2002 UQWG 1

Acknowledgements MCMC experts: Dave Higdon, Frank Alexander, Julian Besag, Jim Gubernatis, John Skilling, Malvin Kalos, Radford Neal. General discussions: Greg Cunningham, Richard Silver. June 27, 2002 UQWG 2

Outline Setting: Bayesian inference for simulation codes (numerous continuous parameters, expensive function evaluations). Adjoint differentiation: based on the code, efficiently calculates the gradient of a computed scalar quantity; uses and methods of implementation. Hybrid Markov Chain Monte Carlo: the basic algorithm requires the gradient of the minus-log-probability. A method to test convergence of an MCMC sequence, based on the gradient of the minus-log-probability. June 27, 2002 UQWG 3

Uses of adjoint differentiation Fitting large-scale simulations to data (regression): atmosphere and ocean models, fluid dynamics, hydrodynamics. Reconstruction: imaging through diffusive media. Hybrid Markov Chain Monte Carlo. Uncertainty analysis of simulation codes: sensitivity of the uncertainty variance to each contributing cause. Metropolis-Hastings MCMC calculations: sensitivity of the efficiency (or acceptance fraction) wrt the proposal-distribution parameters. June 27, 2002 UQWG 4

Forward and inverse probability (Diagram: forward probability maps parameter space to experimental observation space; inverse probability goes the other way.) Forward probability: determine uncertainties in the observables resulting from model-parameter uncertainties. Inverse probability: infer model-parameter uncertainties from uncertainties in the observables; inference. June 27, 2002 UQWG 5

Maximum likelihood estimation by optimization (Diagram: initial state Ψ(0) → simulation Ψ(t) → measurement system model Y*(α); measurements Y; -ln p(y | Y*) = ½ χ²; parameters α; optimizer.) Find the parameters (vector α) that minimize -ln p(y | Y*(α)) = ½ χ² = ½ Σ_i (y_i − y_i*)² / σ_i². The result is the maximum likelihood estimate for α, also known as the minimum-chi-squared or least-squares solution. Prior information is used to overcome ill-posedness; Bayesian approach. June 27, 2002 UQWG 6

Maximum likelihood estimation by optimization (Same diagram: measurements Y; initial state Ψ(0) → simulation Ψ(t) → measurement system model Y*(α); parameters α; optimizer.) Find the minimum of -ln p(y | Y*(α)) = ½ χ² by iteration over the parameters α. The optimization process is accelerated by using gradient-based algorithms; therefore gradients of the simulation and measurement processes are needed. Adjoint differentiation facilitates efficient calculation of these gradients, i.e. the derivative of the scalar output (½ χ²) wrt the parameters α. June 27, 2002 UQWG 7
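As a concrete illustration of gradient-based fitting (not from the talk), the following sketch minimizes ½ χ² for a toy polynomial "simulation" with the gradient supplied analytically through the chain rule; the model, data, and step size are all invented for the example.

```python
import numpy as np

# Minimal sketch: gradient-based fit of a toy "simulation" Y*(alpha) to
# measurements y by minimizing 0.5*chi^2, with the gradient supplied
# analytically via the chain rule.  The linear-in-alpha model below is a
# stand-in for an expensive simulation code.

def simulate(alpha, t):
    """Toy forward model Y*(alpha): a quadratic trend in t."""
    return alpha[0] + alpha[1] * t + alpha[2] * t**2

def half_chi2_and_grad(alpha, t, y, sigma):
    r = (simulate(alpha, t) - y) / sigma            # weighted residuals
    phi = 0.5 * np.sum(r**2)                        # 0.5 * chi^2
    # dphi/dY* = (Y* - y)/sigma^2; chain rule through the model gives dphi/dalpha
    w = r / sigma
    grad = np.array([np.sum(w), np.sum(w * t), np.sum(w * t**2)])
    return phi, grad

# Synthetic data and a plain gradient-descent optimizer
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
sigma = 0.05 * np.ones_like(t)
y = simulate(np.array([1.0, -2.0, 3.0]), t) + sigma * rng.standard_normal(t.size)

alpha = np.zeros(3)
for _ in range(5000):
    phi, grad = half_chi2_and_grad(alpha, t, y, sigma)
    alpha -= 5e-5 * grad
print("ML estimate:", alpha, "  0.5*chi^2:", phi)
```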

Derivative calculation by finite differences The derivative of a function is defined as the limit of a ratio of finite differences: df/dx |_{x1} = lim_{Δx → 0} [f(x1 + Δx) − f(x1)] / Δx. We wish to estimate derivatives of a calculated function for which there is no analytic relation between outputs and inputs. Numerical estimation based on finite differences is problematical: it is difficult to choose the perturbation Δx, and the number of function evaluations scales with the number of variables. Estimation based on the functionality implied by the computer code is more reliable. (Figure: error in the finite-difference derivative of sin(x) vs. Δx, at x = π/4.) June 27, 2002 UQWG 8
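A small numerical sketch (my addition) of why the perturbation Δx is hard to choose: for sin(x) at x = π/4, the forward-difference error first falls with Δx (truncation error) and then rises again (round-off error), so no single Δx is safe for an arbitrary code.

```python
import numpy as np

x0 = np.pi / 4
exact = np.cos(x0)                       # analytic derivative of sin at x0
for dx in 10.0 ** np.arange(-1, -13, -1):
    approx = (np.sin(x0 + dx) - np.sin(x0)) / dx   # forward finite difference
    print(f"dx = {dx:8.1e}   error = {abs(approx - exact):.3e}")
```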

Differentiation of a sequence of transformations (Data-flow diagram: x → A → y → B → z → C → ϕ.) The data-flow diagram shows a sequence of transformations A → B → C that converts the data structure x to y, to z, and finally to the scalar ϕ (forward calculation). We desire the derivatives of ϕ wrt all components of x, assuming that ϕ is differentiable. The chain rule applies: ∂ϕ/∂x_i = Σ_{j,k} (∂y_j/∂x_i) (∂z_k/∂y_j) (∂ϕ/∂z_k). Two choices for the summation order: doing j before k means the derivatives follow the data flow (forward calculation); doing k before j means the derivatives flow in the reverse (adjoint) direction. June 27, 2002 UQWG 9

Adjoint Differentiation In Code Technique (ADICT) (Diagram: forward chain x → A → y → B → z → C → ϕ, with the adjoint quantities ∂ϕ/∂z, ∂ϕ/∂y, ∂ϕ/∂x flowing backward.) For a sequence of transformations that converts the data structure x to the scalar ϕ, the derivatives ∂ϕ/∂x are efficiently calculated in the reverse (adjoint) direction. Code-based approach: the logic of the adjoint code is based explicitly on the forward code or on derivatives of the forward algorithm, not on the theoretical equations, which the forward calculation only approximates. The only assumption is that ϕ is a differentiable function of x. The CPU time to compute all derivatives is comparable to that of the forward calculation. June 27, 2002 UQWG 10
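The following minimal sketch (my illustration, not the BIE or ADICT code) shows the adjoint pattern for a chain x → A → B → C → ϕ: each forward transform is paired with an adjoint routine that maps ∂ϕ/∂(output) to ∂ϕ/∂(input), so all derivatives cost roughly one extra backward sweep. The particular transforms A, B, C are arbitrary choices.

```python
import numpy as np

def A(x):            return np.tanh(x)
def B(y):            return y @ W                     # W is a fixed toy transform
def C(z):            return 0.5 * np.sum(z**2)        # scalar output phi

def A_adj(x, dphi_dy):   return dphi_dy * (1.0 - np.tanh(x)**2)   # chain rule through tanh
def B_adj(y, dphi_dz):   return dphi_dz @ W.T                      # chain rule through y @ W
def C_adj(z):            return z                                  # dphi/dz for 0.5*||z||^2

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))
x = rng.standard_normal(4)

# Forward sweep (store intermediate states), then adjoint sweep in reverse order
y = A(x); z = B(y); phi = C(z)
dphi_dz = C_adj(z)
dphi_dy = B_adj(y, dphi_dz)
dphi_dx = A_adj(x, dphi_dy)

# Check one component against a forward finite difference
eps = 1e-6
x1 = x.copy(); x1[0] += eps
fd = (C(B(A(x1))) - phi) / eps
print("adjoint dphi/dx0 =", dphi_dx[0], "  finite difference =", fd)
```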

Level of abstraction of the implementation One can choose to differentiate the forward calculation at various levels of abstraction. Model based: e.g., differentiate the partial differential equations and solve; not advised, because the forward code only approximates the model. Module or algorithm based: differentiate each basic algorithm (Bayes Inference Engine). Code based: direct interpretation of the computer code (FORTRAN, C, etc.); automatic differentiation utilities produce derivative code(s). Instruction based: reverse the sequence of CPU instructions for any particular calculation. June 27, 2002 UQWG 11

Example of the algorithm-based approach The Bayes Inference Engine (BIE), created at LANL, is a modeling tool for interpreting radiographs. The BIE is programmed by creating a data-flow diagram linking transforms, as shown here for a 3D reconstruction problem. Adjoint differentiation is crucial to the BIE's success. June 27, 2002 UQWG 12

3D reconstruction (Figure: data, frame 51 out of 100; 50 ms, 4000 counts, 24 views. Reconstruction: variable intensity inside a deformable boundary.) June 27, 2002 UQWG 13

Simulation of light diffusion in tissue (Figure: light pulse in, light out; light intensity vs. time after the input pulse, showing an unscattered component and a broad multiply-scattered peak.) IR light photons in the broad, retarded peak literally diffuse by multiple scattering from source to detector; time is equivalent to distance traveled. A diffusion equation models these multiply-scattered photons; these photons do not follow straight lines. June 27, 2002 UQWG 14

Optical tomography: invert the diffusion process (Figure: light pulse in; assumed distribution of diffusion coefficients, 0.7 < D < 1.4 cm²/ns, µ_a = 0.1 cm⁻¹; four source and detector locations.) For the assumed distribution of diffusion coefficients (left), predict the time-dependent output at four locations (right). Reconstruction problem: determine the image on the left from the data on the right. June 27, 2002 UQWG 15

Finite-difference calculation The data-flow diagram shows the calculation of time-dependent measurements by finite-difference simulation. The calculation marches through time steps: the new state U_{n+1} depends only on the previous state U_n. (Diagram labels: diffusion and attenuation coefficients; U = light intensity distribution; measurements on the periphery.) June 27, 2002 UQWG 16

Adjoint differentiation in the diffusion calculation The adjoint differentiation calculation precisely reverses the direction of the forward calculation. Each forward data structure has an associated derivative: where U_n propagates forward, ∂ϕ/∂U_n goes backward (ϕ = ½ χ²). (Diagram labels: diffusion and attenuation coefficients; U = light intensity distribution; measurements on the periphery.) Optical tomographic reconstruction: determine an image of the light-diffusion characteristics from measurements of how well IR light passes through tissue. June 27, 2002 UQWG 17

Reconstruction of a simple phantom (Figure: reconstructions for p = 1.1 and p = 2.) The measurement section is (6.4 cm)², with 0.7 < D < 1.4 cm²/ns (µ_abs = 0.1 cm⁻¹). 4 input-pulse locations (middle of each side); 4 detector locations; intensity measured every 50 ps for 1 ns. Reconstructions on a 64 x 64 grid from noisy data (rms noise = 3%). Conjugate-gradient optimization algorithm. June 27, 2002 UQWG 18

Reconstruction of an infant's brain (Figure: original MRI data and reconstruction, initial guess D = 1 cm²/ns; field of view 11 cm x 11 cm; color scale D [cm²/ns]; hematoma (left side) and cerebrospinal fluid pocket (upper right); sources & detectors marked; 60 iterations, ~70 min.) June 27, 2002 UQWG 19

Automatic differentiation tools Several tools exist for automatically differentiating codes, with various capabilities, e.g., forward or reverse (adjoint) differentiation, handling of large codes, etc. FORTRAN 77 (90 under development): ADIFOR (reverse mode), TAMC (reverse mode), TAPENADE (reverse mode). C (C++ under development): ADIC, ADOL-C (reverse mode). MATLAB: ADMAT. Very active area of development. June 27, 2002 UQWG 20

MCMC for simulations (Diagram: parameters {x} → simulation → calculated measurements Y*(x); measurements Y; -ln p(y | Y*) = ½ χ² = ½ Σ_i (y_i − y_i*)² / σ_i²; -ln p(x | Y) fed to MCMC.) The minus-log-likelihood is the result of the calculation; it is a function of the parameters x. The Markov Chain Monte Carlo (MCMC) algorithm draws random samples of x from the posterior probability p(x | Y). It produces a plausible set of parameters {x}, and therefore model realizations. June 27, 2002 UQWG 21

MCMC - problem statement The parameter space of n dimensions is represented by the vector x. Draw a set of samples {x_k} from a given arbitrary target probability density function (pdf), q(x). The only requirement, typically, is that one be able to evaluate C q(x) for any given x, where C is an unknown constant; that is, q(x) need not be normalized. Although the focus here is on continuous variables, MCMC applies to discrete variables as well. June 27, 2002 UQWG 22

Uses of MCMC Permits evaluation of the expectation values of functions of x, e.g., ⟨f(x)⟩ = ∫ f(x) q(x) dx ≈ (1/K) Σ_k f(x_k); a typical use is to calculate the mean ⟨x⟩ and variance ⟨(x − ⟨x⟩)²⟩. Useful for evaluating integrals, such as the partition function for properly normalizing the pdf. Dynamic display of sequences provides visualization of the uncertainties in the model and the range of model variations. Automatic marginalization: when considering any subset of parameters of an MCMC sequence, the remaining parameters are marginalized over (integrated out). June 27, 2002 UQWG 23
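A tiny sketch (my addition) of how expectations and marginals are read off a set of samples; the array 'samples' is a stand-in for an actual MCMC chain.

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.multivariate_normal([0.0, 1.0], [[1.0, 0.8], [0.8, 1.0]], size=5000)

mean = samples.mean(axis=0)          # <x>
var = samples.var(axis=0)            # <(x - <x>)^2>
marginal_x1 = samples[:, 0]          # samples of x1 with x2 integrated out
print("mean:", mean, " variance:", var, " marginal var(x1):", marginal_x1.var())
```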

Metropolis Markov Chain Monte Carlo Generates a sequence of random samples from an arbitrary probability density function. Metropolis algorithm: draw a trial step from a symmetric pdf, i.e., t(Δx) = t(−Δx); accept or reject the trial step. Simple and generally applicable; requires only calculation of the target pdf, q(x), for any x. (Figure: contours of Probability(x1, x2) with accepted and rejected steps.) June 27, 2002 UQWG 24

Metropolis algorithm Select an initial parameter vector x_0. Iterate as follows: at iteration number k, (1) create a new trial position x* = x_k + Δx, where Δx is randomly chosen from t(Δx); (2) calculate the ratio r = q(x*)/q(x_k); (3) accept the trial position, i.e. set x_{k+1} = x*, if r ≥ 1, or with probability r if r < 1; otherwise stay put, x_{k+1} = x_k. Only requires computation of q(x) (with arbitrary normalization). Creates a Markov chain, since x_{k+1} depends only on x_k. June 27, 2002 UQWG 25
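A minimal Python sketch of the Metropolis algorithm as just stated (my illustration; the correlated-Gaussian target and the step size are arbitrary choices). Only an unnormalized target q(x) is needed.

```python
import numpy as np

def q(x):
    """Unnormalized 2D correlated Gaussian target."""
    cov_inv = np.linalg.inv(np.array([[1.0, 0.95], [0.95, 1.0]]))
    return np.exp(-0.5 * x @ cov_inv @ x)

rng = np.random.default_rng(3)
x = np.zeros(2)                                    # initial parameter vector x_0
samples, accepted = [], 0
for k in range(20000):
    x_trial = x + rng.normal(scale=0.5, size=2)    # symmetric trial step t(dx)
    r = q(x_trial) / q(x)                          # acceptance ratio
    if rng.uniform() < r:                          # accept if r >= 1 or with prob. r
        x, accepted = x_trial, accepted + 1
    samples.append(x)

samples = np.array(samples)
print("acceptance fraction:", accepted / len(samples))
print("sample mean:", samples.mean(axis=0), " sample cov:\n", np.cov(samples.T))
```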

Gibbs algorithm Vary only one component of x at a time. Draw a new value of x_j from the conditional pdf q(x_j | x_1, x_2, ..., x_{j-1}, x_{j+1}, ...). Cycle through all components. (Figure: contours of Probability(x1, x2) with axis-aligned Gibbs moves.) June 27, 2002 UQWG 26
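A minimal Gibbs-sampling sketch (my addition) for a 2D Gaussian with correlation ρ, where the one-dimensional conditionals are available in closed form and each step draws directly from them.

```python
import numpy as np

rho = 0.95
rng = np.random.default_rng(4)
x1, x2 = 0.0, 0.0
samples = []
for k in range(20000):
    # q(x1 | x2) is N(rho * x2, 1 - rho^2); then q(x2 | x1) likewise
    x1 = rng.normal(rho * x2, np.sqrt(1.0 - rho**2))
    x2 = rng.normal(rho * x1, np.sqrt(1.0 - rho**2))
    samples.append((x1, x2))

samples = np.array(samples)
print("sample correlation:", np.corrcoef(samples.T)[0, 1])
```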

Hybrid MCMC method Called the hybrid method because it alternates Gibbs & Metropolis steps (better called the Hamiltonian method?). Associate with each parameter x_i a momentum p_i. Define a Hamiltonian (sum of potential and kinetic energy): H = ϕ(x) + Σ_i p_i²/(2 m_i), where ϕ = −log(q(x)). The objective is to draw samples from the new pdf q'(x, p) ∝ exp(−H(x, p)) = q(x) exp(−Σ_i p_i²/(2 m_i)). Samples {x_k} from q'(x, p) represent draws from q(x) because the p dependence is marginalized out. June 27, 2002 UQWG 27

Hybrid algorithm (Figure: trajectories in the (x_i, p_i) plane for steps k, k+1, k+2.) Typical trajectories: red path, Gibbs sample from the momentum distribution; green path, trajectory with constant H, followed by a Metropolis accept/reject. June 27, 2002 UQWG 28

Hamiltonian algorithm Gibbs step: randomly sample the momentum distribution. Follow a trajectory of constant H using the leapfrog algorithm: p_i(t + τ/2) = p_i(t) − (τ/2) ∂ϕ/∂x_i |_{x(t)}; x_i(t + τ) = x_i(t) + (τ/m_i) p_i(t + τ/2); p_i(t + τ) = p_i(t + τ/2) − (τ/2) ∂ϕ/∂x_i |_{x(t + τ)}, where τ is the leapfrog time step. Repeat the leapfrog a predetermined number of times. Metropolis step: accept or reject on the basis of H at the beginning and end of the H trajectory. June 27, 2002 UQWG 29

Hybrid algorithm implementation Gibbs step: easy, because the draws are from uncorrelated Gaussians. H trajectories followed for several leapfrog steps permit long jumps in (x, p) space with little change in H; specify the total time T, so the number of leapfrog steps is T/τ; randomize T to avoid coherent oscillations; reverse the momenta at the end of the H trajectory to guarantee that it is a symmetric process (condition for the Metropolis step). Metropolis step: no rejections if H is unchanged. Adjoint differentiation efficiently provides the gradient. June 27, 2002 UQWG 30
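Putting slides 27-30 together, here is a compact sketch of the hybrid (Hamiltonian) algorithm (my illustration, with unit masses, a Gaussian target, and an analytic gradient standing in for the adjoint-computed one).

```python
import numpy as np

cov_inv = np.linalg.inv(np.array([[1.0, 0.95], [0.95, 1.0]]))
def phi(x):       return 0.5 * x @ cov_inv @ x        # minus-log-probability
def grad_phi(x):  return cov_inv @ x                  # gradient (adjoint stand-in)

def hybrid_step(x, tau, n_max, rng):
    p = rng.standard_normal(x.size)                   # Gibbs step: fresh momenta
    x_new, p_new = x.copy(), p.copy()
    for _ in range(rng.integers(1, n_max + 1)):       # randomized trajectory length
        p_new -= 0.5 * tau * grad_phi(x_new)          # leapfrog: half momentum step
        x_new += tau * p_new                          # full position step (m_i = 1)
        p_new -= 0.5 * tau * grad_phi(x_new)          # half momentum step
    H_old = phi(x) + 0.5 * p @ p
    H_new = phi(x_new) + 0.5 * p_new @ p_new
    # Metropolis step on the change in H; momenta are then discarded
    return x_new if rng.uniform() < np.exp(H_old - H_new) else x

rng = np.random.default_rng(5)
x = np.zeros(2)
samples = []
for k in range(5000):
    x = hybrid_step(x, tau=0.15, n_max=20, rng=rng)
    samples.append(x)
print("sample cov:\n", np.cov(np.array(samples).T))
```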

2D isotropic Gaussian distribution Long H trajectories show ellipses when σ_1 = σ_2 = 1, m_1 = m_2 = 1. Randomize the length of the H trajectories to obtain good sampling of the pdf. June 27, 2002 UQWG 31

2D correlated Gaussian distribution 2D Gaussian pdf with high correlation (r =0.95) Length of H trajectories randomized June 27, 2002 UQWG 32

MCMC Efficiency Estimate of a quantity from its samples drawn from a pdf q(v): ṽ = (1/N) Σ_k v_k. For N independent samples drawn from the pdf, the variance in the estimate is var(ṽ) = var(v)/N. For N samples from an MCMC sequence with target pdf q(v), var(ṽ) = var(v)/(η N), where η is the sampling efficiency. Thus, η⁻¹ iterations are needed for one statistically independent sample. Let v = the variance, because the aim here is to estimate the variance of the target pdf. June 27, 2002 UQWG 33
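One common way to estimate the sampling efficiency η from a chain is through the integrated autocorrelation time; the sketch below (my addition, not necessarily the estimator used in the talk) applies it to an AR(1) sequence standing in for MCMC output of the tracked scalar v.

```python
import numpy as np

def sampling_efficiency(v, max_lag=1000):
    """Estimate eta = 1 / tau_int from a scalar chain v."""
    v = np.asarray(v, dtype=float)
    v = v - v.mean()
    n = v.size
    acf = [1.0]
    for lag in range(1, min(max_lag, n - 1)):
        c = np.dot(v[:-lag], v[lag:]) / ((n - lag) * v.var())
        if c < 0.0:                        # truncate at the first negative value
            break
        acf.append(c)
    tau_int = 1.0 + 2.0 * sum(acf[1:])     # integrated autocorrelation time
    return 1.0 / tau_int                   # one independent sample per 1/eta steps

# Example: AR(1) chain with coefficient 0.9, for which eta should be near 0.05
rng = np.random.default_rng(6)
a, v = 0.9, [0.0]
for _ in range(100000):
    v.append(a * v[-1] + rng.standard_normal())
print("estimated efficiency eta =", sampling_efficiency(v))
```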

n-D isotropic Gaussian distributions (Figure: MCMC efficiency versus the number of dimensions for the hybrid and Metropolis methods.) Hamiltonian method: efficiency drops little with dimension. Metropolis method: efficiency goes as 0.3/n. The hybrid (Hamiltonian) method is much more efficient at high dimensions, assuming a gradient evaluation costs the same as a function evaluation. June 27, 2002 UQWG 34

A new convergence test statistic Variance integral: var(x_i) = ∫ (x_i − ⟨x_i⟩)² p(x) dx = (1/3) ∫ (x_i − ⟨x_i⟩)³ ∂ϕ/∂x_i p(x) dx + [(1/3) (x_i − ⟨x_i⟩)³ p(x)] evaluated at the limits, by integration by parts; the limits are typically ±∞ and the last term is usually zero; thus the two integrals are equal. Form the ratio of the integrals, computed from samples x^k drawn from p(x): R = Σ_k (x_i^k − x̄_i)³ ∂ϕ/∂x_i |_{x^k} / [3 Σ_k (x_i^k − x̄_i)²], where ϕ(x) = −log(p(x)) and x̄_i is the sample mean of x_i. R tends to be less than 1 when p(x) is not adequately sampled. June 27, 2002 UQWG 35
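The R statistic is easy to compute once the gradient of ϕ is available at each sample; the sketch below (my implementation of the stated formula, not the talk's code) checks it on a 1D Gaussian, where a deliberately under-dispersed chain gives R well below 1.

```python
import numpy as np

def convergence_R(x, grad_phi):
    """R = sum((x - xbar)^3 * dphi/dx) / (3 * sum((x - xbar)^2)) for one parameter."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.sum(d**3 * grad_phi(x)) / (3.0 * np.sum(d**2))

# Target: N(0, sigma^2), so phi = x^2 / (2 sigma^2) and dphi/dx = x / sigma^2
sigma = 4.0
grad_phi = lambda x: x / sigma**2

rng = np.random.default_rng(7)
full = rng.normal(0.0, sigma, size=20000)           # adequately sampled chain
narrow = rng.normal(0.0, 0.5 * sigma, size=20000)   # under-dispersed "unconverged" chain
print("R (well sampled)  =", convergence_R(full, grad_phi))     # close to 1
print("R (under-sampled) =", convergence_R(narrow, grad_phi))   # well below 1
```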

2D non-isotropic Gaussian distribution Non-isotropic Gaussian target pdf: σ_1 = 4, σ_2 = 1, m_1 = m_2 = 1. Randomize the length of the H trajectories to get random sampling. Convergence: does the sequence actually sample the target pdf? June 27, 2002 UQWG 36

Convergence - 2D non-isotropic Gaussians (Figure: est. var(2), R(2), R(1), and est. var(1)/16 versus iteration.) Non-isotropic Gaussian target pdf: σ_1 = 4, σ_2 = 1, m_1 = m_2 = 1. Control the degree of pdf sampling by using short leapfrog steps (τ = 0.2) and T_max = 2. The test statistic R < 1 when the estimated variance is deficient. June 27, 2002 UQWG 37

16D correlated Gaussian distribution 16D Gaussian pdf related to a smoothness prior based on the integral of the L2 norm of the second derivative. Efficiency per function evaluation: 2.2% with the hybrid (Hamiltonian) algorithm; 0.11% or 1.6% with Metropolis, without and with covariance adaptation, respectively. June 27, 2002 UQWG 38

MCMC - Issues Identification of convergence to the target pdf: is the sequence in thermodynamic equilibrium with the target pdf? Validity of estimated properties of the parameters (covariance). Burn-in: at the beginning of the sequence, the MCMC may need to run for a while to achieve convergence to the target pdf. Use of multiple sequences: different starting values can help confirm convergence; a natural choice when using computers with multiple CPUs. Accuracy of estimated properties of the parameters: related to the efficiency, described above. June 27, 2002 UQWG 39

Conclusions Adjoint differentiation provides efficient calculation of the gradient of a scalar function of many variables: optimization (regression); MCMC, especially the hybrid method (other possible uses exist). The hybrid method, based on Hamiltonian dynamics: efficiency for isotropic Gaussians is about 7% per function evaluation, independent of the number of dimensions; much better efficiency than Metropolis for large dimensions, provided the gradient can be efficiently calculated. The convergence test based on the gradient of -log(probability) tests how well the MCMC sequence samples the full width of the target pdf. June 27, 2002 UQWG 40

Bibliography Bayesian Learning for Neural Networks, R. M. Neal (Springer, 1996); Hamiltonian hybrid MCMC. Posterior sampling with improved efficiency, K. M. Hanson and G. S. Cunningham, Proc. SPIE 3338, 371-382 (1998); Metropolis examples; includes an introduction to MCMC. Markov Chain Monte Carlo in Practice, W. R. Gilks et al. (Chapman and Hall, 1996); excellent review; applications. Bayesian computation and stochastic systems, J. Besag et al., Stat. Sci. 10, 3-66 (1995); MCMC applied to image analysis. Inversion based on complex simulations, K. M. Hanson, in Maximum Entropy and Bayesian Methods, G. J. Erickson et al., eds. (Kluwer Academic, 1998); describes adjoint differentiation and its usefulness. Automatic Differentiation of Algorithms, G. Corliss et al., eds. (Springer, 2002); papers from a SIAM workshop on code differentiation. Evaluating Derivatives, A. Griewank (SIAM, 2000); excellent hands-on tutorial on code differentiation. This presentation available under http://www.lanl.gov/home/kmh/ June 27, 2002 UQWG 41