Markov chain Monte Carlo methods for visual tracking

Ray Luo <rluo@cory.eecs.berkeley.edu>
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley, Berkeley, CA 94720

Abstract

Tracking articulated figures in high-dimensional state spaces requires tractable methods for inferring posterior distributions of joint locations, angles, and occlusion parameters. Markov chain Monte Carlo (MCMC) methods are efficient sampling procedures for approximating probability distributions. We apply MCMC to the domain of people tracking and investigate a general framework for sample-approximation tracking based on the Particle Filter, MCMC, and simulated annealing. A tutorial discussion of MCMC is provided.

1 Introduction

Tracking complex moving ensembles in noisy environments with highly quantized video sequences produces difficult computational problems that cannot be fixed simply by making the object or motion models more complex. Complicated object models involve a greater number of state variables that parameterize the object's orientation or position, and the high dimensionality of the resulting state space makes probabilistic inference for the posterior distribution of the object's location or angles intractable. Detailed motion models fail when asked to track a similar but different motion sequence; for example, built-in walking models for tracking walking humans fail when the subject turns a corner [10].

Conventional approaches to visual tracking involve a sampling and resampling procedure commonly known as Condensation or Particle Filtering [4]. Unlike linear prediction algorithms such as the Kalman Filter [5], Particle Filters do not assume Gaussian transition and observation noise. Their drawback is the exponentially increasing number of particles needed for high-dimensional noisy problems. Markov chain Monte Carlo (MCMC) is a set of techniques for efficient sampling of probability distributions. We apply MCMC to the domain of people tracking to construct better, more efficient approximations to the posterior distributions of states.
We begin with a review of tracking as a probabilistic prediction and inference problem. After discussing the Particle Filter, we move on to MCMC methods, culminating in the hybrid Monte Carlo method that was implemented. Finally, we describe each piece of the tracker as a Bayesian filtering step and discuss extensions and future research directions.

2 Probabilistic foundation

Let $S_t \in K$ be a vector of variables we want to track, e.g. torso position, joint angles, and body orientation, where $K$ is the appropriate domain for each variable, e.g. $[0, \pi]$ for the elbow angle and $\mathbb{R}^3$ for the location of the body centroid. In general, each $i$th variable $S_t^i$ can be a vector in a vector space. Our observed data at time $t$ is a vector $D_t$, and at any given time $t$ we are given the observations $D_{0:t} = (D_0, D_1, \ldots, D_t)$. We are interested in the posterior distribution $p(S_t \mid D_{0:t})$, which assigns a probability to each configuration $S_t = s_t$.

2.1 Bayesian filtering

We factor the posterior distribution by Bayes' rule [7]:

$$p(S_t \mid D_{0:t}) = \frac{p(D_t \mid S_t, D_{0:t-1})\, p(S_t \mid D_{0:t-1})}{p(D_t \mid D_{0:t-1})} = \alpha\, p(D_t \mid S_t, D_{0:t-1})\, p(S_t \mid D_{0:t-1}),$$

where $p(D_t \mid D_{0:t-1})$ is the constant $\alpha^{-1}$, set after calculations to a value that normalizes the distribution.

Next we make some assumptions about the observations to simplify our problem. Given the current state, past observations should be independent of present observations. For example, suppose your little brother puts an indeterminate number of coins into the piggy bank every week, and your state estimate is the observed number of coins from the previous week. Then this week's observation is independent of past observations as long as your brother is a capricious money saver. Note that in this case the state is simply a record of observations. In general the state is a function of the observations, but the independence relationship of the observations may still hold. This gives us the factorization

$$p(S_t \mid D_{0:t}) = \alpha\, p(D_t \mid S_t)\, p(S_t \mid D_{0:t-1}), \qquad (1)$$

where $p(D_t \mid S_t)$ is the likelihood function and $p(S_t \mid D_{0:t-1})$ is the prediction distribution, which represents the current state estimate given past observations only. Equation (1) specifies the correction made to $S_t$ given the data; it represents the control process of our dynamical system due, for example, to sensory feedback and temporal corrections.

Marginalizing over the previous state gives the prediction distribution (also known as the temporal prior):

$$p(S_t \mid D_{0:t-1}) = \sum_{S_{t-1}} p(S_t, S_{t-1} \mid D_{0:t-1}) = \sum_{S_{t-1}} p(S_t \mid S_{t-1}, D_{0:t-1})\, p(S_{t-1} \mid D_{0:t-1}).$$

Let us now make an assumption regarding our state description of the dynamical system. In particular, we assume that the future state of the system is independent of the past observations given the present state; that is, the present state captures everything we need to know to calculate the state probabilities at the next time step. This is a first-order Markov assumption applied to Bayesian filtering. Note that the assumption has nothing to do with the underlying physical process: given a rich enough state description and accurate sensor readings, we can always satisfy the Markov assumption to a sufficient degree. Now the prediction is

$$p(S_t \mid D_{0:t-1}) = \sum_{S_{t-1}} p(S_t \mid S_{t-1})\, p(S_{t-1} \mid D_{0:t-1}). \qquad (2)$$
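The predict-correct recursion of equations (1) and (2) can be illustrated for a small discrete state space. The grid world, transition model, and sensor model below are hypothetical toy choices for illustration only, not part of the tracker:

```python
# Discrete Bayes filter: one predict/correct cycle per observation.
# Hypothetical toy model (not from the paper): a target moves on a
# 1-D grid of n cells and a noisy sensor reports its cell.

def normalize(p):
    """Rescale a distribution so it sums to one (the alpha in Eq. 1)."""
    z = sum(p)
    return [pi / z for pi in p]

def predict(posterior, transition):
    """Temporal prior, Eq. (2): sum over the previous state."""
    n = len(posterior)
    return [sum(transition[s_prev][s] * posterior[s_prev] for s_prev in range(n))
            for s in range(n)]

def correct(prior, likelihood):
    """Correction, Eq. (1): posterior proportional to likelihood times prior."""
    return normalize([l * p for l, p in zip(likelihood, prior)])

# Target tends to move one cell to the right; sensor is right 80% of the time.
n = 5
transition = [[0.0] * n for _ in range(n)]
for s in range(n):
    transition[s][min(s + 1, n - 1)] += 0.7   # move right (clamped at the edge)
    transition[s][s] += 0.3                    # stay put

def sensor_likelihood(observed_cell):
    return [0.8 if s == observed_cell else 0.2 / (n - 1) for s in range(n)]

belief = [1.0 / n] * n            # flat initial prior
for obs in [1, 2, 3]:             # observed cells over three frames
    belief = correct(predict(belief, transition), sensor_likelihood(obs))

print(belief.index(max(belief)))  # prints 3, the most probable cell
```

Note that the posterior from each frame becomes the prior for the next, exactly the recursive formulation described above.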

where $p(S_t \mid S_{t-1})$ is the dynamics or transition probability and $p(S_{t-1} \mid D_{0:t-1})$ is the prior distribution, i.e. the posterior from the previous time step. From (1) and (2), we get a recursive formulation of the posterior state distribution that depends only on previous and current observations. Starting with a flat initial prior that assigns uniform probability to every state configuration, we propagate $S_{t-1}$ forward in time according to our dynamical model, examine the probability of the new data given the propagated $S_t$, and calculate the new posterior from the temporal prior and the likelihood. The space complexity of the general algorithm depends on the size of $S_t$.

2.2 Sample approximation

The Monte Carlo principle approximates a probability distribution using a weighted set of delta functions [8].

2.3 Particle Filtering

Kalman filtering [5]. Particle filtering of Isard and Blake [4].

3 Markov chain Monte Carlo

3.1 Gibbs sampling

Geman and Geman were the first to apply Gibbs sampling to an image restoration problem [3].

3.2 Metropolis algorithm

Metropolis et al. used a symmetric proposal distribution [9].

3.3 Hybrid Monte Carlo

An auxiliary-variable sampler samples from an augmented distribution and obtains the desired sample approximation by marginalizing over the unwanted variables. The most popular auxiliary-variable algorithm is the hybrid Monte Carlo filter, first introduced in the thermal physics community by Duane et al. [2], who showed that MCMC can be combined with the Metropolis test.

3.4 Simulated annealing

Kirkpatrick et al. first applied simulated annealing, a technique drawn from statistical physics, to combinatorial optimization problems [6].
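As a concrete illustration of the symmetric-proposal Metropolis algorithm of Section 3.2 (a minimal sketch, not the tracker's implementation), the following samples a one-dimensional Gaussian target known only up to its normalizing constant using a random-walk proposal:

```python
import math
import random

random.seed(0)

def log_target(x):
    """Unnormalized log-density of the target: a standard Gaussian."""
    return -0.5 * x * x

def metropolis(log_p, x0, step, n_samples):
    """Random-walk Metropolis: symmetric Gaussian proposal + accept/reject test."""
    x, samples = x0, []
    for _ in range(n_samples):
        x_prop = x + random.gauss(0.0, step)        # symmetric proposal
        # Accept with probability min(1, p(x')/p(x)), done in log space.
        if math.log(random.random()) < log_p(x_prop) - log_p(x):
            x = x_prop
        samples.append(x)                           # on rejection, repeat x
    return samples

samples = metropolis(log_target, x0=3.0, step=1.0, n_samples=20000)
burned = samples[2000:]                             # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
# The empirical mean and variance should approach 0 and 1 respectively.
```

Because the proposal is symmetric, the Hastings correction cancels and the acceptance ratio reduces to a ratio of target densities, which is why only an unnormalized density is needed.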

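The hybrid Monte Carlo transition of Section 3.3 can be sketched as follows. This is a minimal one-dimensional illustration with illustrative step sizes, not the tracker's sampler: the momentum is the auxiliary variable, a leapfrog integrator simulates the Hamiltonian dynamics, and a Metropolis test on the change in total energy corrects for discretization error:

```python
import math
import random

random.seed(1)

def U(x):       # potential energy = negative log target (standard Gaussian)
    return 0.5 * x * x

def grad_U(x):  # gradient of the potential
    return x

def hmc_step(x, eps, n_leapfrog):
    """One hybrid Monte Carlo transition: momentum draw, leapfrog, Metropolis test."""
    p = random.gauss(0.0, 1.0)                    # auxiliary momentum variable
    x_new, p_new = x, p
    p_new -= 0.5 * eps * grad_U(x_new)            # half step for momentum
    for i in range(n_leapfrog):
        x_new += eps * p_new                      # full step for position
        if i != n_leapfrog - 1:
            p_new -= eps * grad_U(x_new)          # full step for momentum
    p_new -= 0.5 * eps * grad_U(x_new)            # final half step
    # Metropolis test on the change in total energy H = U(x) + p^2 / 2.
    dH = (U(x_new) + 0.5 * p_new ** 2) - (U(x) + 0.5 * p ** 2)
    return x_new if math.log(random.random()) < -dH else x

x, samples = 3.0, []
for _ in range(5000):
    x = hmc_step(x, eps=0.2, n_leapfrog=10)
    samples.append(x)
burned = samples[500:]
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
# The empirical mean and variance should approach 0 and 1 respectively.
```

Discarding the momentum after each transition marginalizes over the auxiliary variable, leaving samples from the desired target.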
4 Human tracker

Choo and Fleet built a people tracker based on hybrid Monte Carlo sampling of the posterior [1].

4.1 Likelihood

4.2 Dynamics

4.3 Temporal prior

5 Results

6 Conclusions

Acknowledgments

The author would like to thank David J. Fleet and Eunice Poon.

References

[1] K. Choo and D. J. Fleet. Tracking people using hybrid Monte Carlo. In International Conference on Computer Vision, volume II, pages 321-328, Vancouver, Canada, 2001.
[2] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216-222, 1987.
[3] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721-741, November 1984.
[4] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional density. In European Conference on Computer Vision, pages 343-356, Cambridge, UK, 1996.
[5] R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the American Society of Mechanical Engineers - Journal of Basic Engineering, 82(D):35-45, 1960.
[6] S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671-680, May 1983.
[7] A. Leon-Garcia. Probability and Random Processes for Electrical Engineering. Addison-Wesley, Reading, Massachusetts, 2nd edition, 1994.

[8] J. S. Liu and R. Chen. Sequential Monte Carlo methods for dynamical systems. Journal of the American Statistical Association, 93:1032-1044, 1998.
[9] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21(6):1087-1092, June 1953.
[10] H. Sidenbladh, M. J. Black, and D. J. Fleet. Stochastic tracking of 3D human figures using 2D image motion. In European Conference on Computer Vision, volume II, pages 702-718, Dublin, Ireland, 2000.