A Backward Particle Interpretation of Feynman-Kac Formulae
A Backward Particle Interpretation of Feynman-Kac Formulae

P. Del Moral
Centre INRIA de Bordeaux - Sud Ouest

Workshop on Filtering, Cambridge Univ., June 14-15th 2010

Preprints (with hyperlinks), joint works with A. Doucet & S.S. Singh:
- A Backward Particle Interpretation of Feynman-Kac Formulae. HAL-INRIA RR-7019 (2009), to appear in M2AN (2010).
- Forward Smoothing Using Sequential Monte Carlo. CUED/F-INFENG/TR.638, Cambridge University, Engineering Dept. (2009).

P. Del Moral (INRIA) INRIA Bordeaux-Sud Ouest 1 / 26
Outline
1. Introduction, motivations
2. Nonlinear Markov models
3. Some convergence results
4. Additive functionals
Summary
1. Introduction, motivations: some notation; Feynman-Kac measures; filtering and sensitivity analysis
2. Nonlinear Markov models
3. Some convergence results
4. Additive functionals
Some notation
- $E$ a measurable state space, $\mathcal{P}(E)$ the probability measures on $E$, $\mathcal{B}(E)$ the bounded measurable functions on $E$.
- For $(\mu, f) \in \mathcal{P}(E) \times \mathcal{B}(E)$: $\mu(f) = \int \mu(dx)\, f(x)$.
- $M(x, dy)$ an integral operator on $E$: $M(f)(x) = \int M(x, dy)\, f(y)$ and $[\mu M](dy) = \int \mu(dx)\, M(x, dy)$, so that $[\mu M](f) = \mu[M(f)]$.
- Bayes-Boltzmann-Gibbs transformation: for $G : E \to [0, \infty)$ with $\mu(G) > 0$,
$$\Psi_G(\mu)(dx) = \frac{1}{\mu(G)}\, G(x)\, \mu(dx).$$
- $E$ finite $\Rightarrow$ vector-matrix notation: $\mu = [\mu(1), \ldots, \mu(d)]$ and $f = [f(1), \ldots, f(d)]'$.
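As a concrete illustration (not part of the original slides), the Bayes-Boltzmann-Gibbs transformation on a finite state space is a one-line reweighting of a probability vector; the three-point measure and potential below are assumed toy values:

```python
import numpy as np

def boltzmann_gibbs(mu, G):
    """Bayes-Boltzmann-Gibbs transformation Psi_G(mu) on a finite state space:
    Psi_G(mu)(x) = G(x) mu(x) / mu(G), requiring mu(G) > 0."""
    mu = np.asarray(mu, dtype=float)
    G = np.asarray(G, dtype=float)
    muG = np.dot(mu, G)              # mu(G) = sum_x mu(x) G(x)
    assert muG > 0, "mu(G) > 0 is required"
    return G * mu / muG

mu = np.array([0.5, 0.3, 0.2])       # a probability vector on E = {1, 2, 3}
G = np.array([1.0, 2.0, 0.0])        # a nonnegative potential
psi = boltzmann_gibbs(mu, G)         # reweighted probability vector
```

Note that states where $G$ vanishes get zero mass after the transformation, which is the mechanism behind the selection steps later in the talk.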
Feynman-Kac measures
Markov chain $X_n$ with transitions $M_n(x_{n-1}, dx_n)$ on state spaces $E_n$; potential functions $G_n : x_n \in E_n \mapsto G_n(x_n) \in [0, \infty)$.

Feynman-Kac path measures:
$$dQ_n := \frac{1}{Z_n} \Big\{ \prod_{0 \le p < n} G_p(X_p) \Big\}\, dP_n, \quad \text{with } P_n := \mathrm{Law}(X_0, \ldots, X_n).$$

The $n$-time marginals: for $f_n \in \mathcal{B}(E_n)$,
$$\eta_n(f_n) := \frac{\gamma_n(f_n)}{\gamma_n(1)}, \quad \text{with } \gamma_n(f_n) := \mathbb{E}\Big[ f_n(X_n) \prod_{0 \le p < n} G_p(X_p) \Big].$$

Updated measures $\widehat{Q}_n$, $\widehat{\eta}_n$, $\widehat{\gamma}_n$: defined likewise w.r.t. the product over $0 \le p \le n$.
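On a finite state space the marginals $\gamma_n$ satisfy the matrix recursion $\gamma_{p+1} = (\gamma_p \cdot G_p)\, M_{p+1}$, which can be checked against the defining path-space expectation by brute-force enumeration. The two-state chain below is an assumed toy model, not one from the talk:

```python
import numpy as np
from itertools import product

# Toy two-state model (assumed for illustration).
M = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # Markov transitions M(x, y)
G = np.array([1.0, 0.5])             # time-homogeneous potential
eta0 = np.array([0.5, 0.5])          # initial law eta_0 = gamma_0
f = np.array([2.0, -1.0])            # test function f_n
n = 3

# Recursion: gamma_{p+1} = (gamma_p * G) M, then eta_n = gamma_n / gamma_n(1).
gamma = eta0.copy()
for _ in range(n):
    gamma = (gamma * G) @ M
gamma_f = gamma @ f
eta_f = gamma_f / gamma.sum()

# Defining expectation: gamma_n(f) = E[ f(X_n) prod_{p<n} G_p(X_p) ],
# computed exactly by summing over all paths (x_0, ..., x_n).
direct = 0.0
for path in product(range(2), repeat=n + 1):
    w = eta0[path[0]]
    for p in range(n):
        w *= G[path[p]] * M[path[p], path[p + 1]]
    direct += w * f[path[n]]
```

The recursion and the path enumeration agree to machine precision, which is exactly the statement that $\gamma_n$ is the unnormalized $n$-time marginal of the path measure.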
Example: Nonlinear filtering
$$P\big((X_n, Y_n) \in d(x, y) \mid (X_{n-1}, Y_{n-1})\big) := M_n(X_{n-1}, dx)\, g_n(x, y)\, \lambda_n^Y(dy).$$
Given the observation sequence $Y = y$, take $G_n(x_n) = g_n(x_n, y_n)$; then
$$\widehat{\eta}_n = \mathrm{Law}(X_n \mid Y_p = y_p,\ 0 \le p \le n), \qquad \widehat{\gamma}_n(1) = p_n(y_0, \ldots, y_n) \ \Big(\text{density w.r.t. } \prod_{0 \le p \le n} \lambda_p^Y\Big).$$
In path space settings $\Rightarrow$ smoothing and path estimation:
$$\widehat{Q}_n = \mathrm{Law}\big((X_0, \ldots, X_n) \mid Y_p = y_p,\ 0 \le p \le n\big).$$
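A minimal sketch of this identity on a discrete HMM (assumed toy parameters, not the talk's model): the observation likelihood $p_n(y_0, \ldots, y_n)$ computed by the standard forward algorithm coincides with the product of one-step normalizers $\prod_{0 \le p \le n} \eta_p(G_p)$ of the associated Feynman-Kac model with $G_p(x) = g(x, y_p)$:

```python
import numpy as np

# Discrete HMM (assumed toy parameters).
M = np.array([[0.8, 0.2],
              [0.3, 0.7]])            # signal transitions
g = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # emission densities g(x, y)
eta0 = np.array([0.6, 0.4])           # initial law of X_0
y = [0, 1, 1, 0]                      # observed sequence

# Forward algorithm: alpha_n(x) = p(y_0, ..., y_n, X_n = x).
alpha = eta0 * g[:, y[0]]
for obs in y[1:]:
    alpha = (alpha @ M) * g[:, obs]
likelihood = alpha.sum()              # = p(y_0, ..., y_n) = updated gamma_n(1)

# Same likelihood via the Feynman-Kac normalizers eta_p(G_p):
# update (Bayes step with G_p), then predict (transport by M).
eta = eta0.copy()
prod = 1.0
for obs in y:
    w = g[:, obs]                     # G_p(x) = g(x, y_p)
    prod *= np.dot(eta, w)            # eta_p(G_p) = p(y_p | y_0, ..., y_{p-1})
    eta = (eta * w) @ M / np.dot(eta, w)
```

The product form is the multiplicative formula of the next slides specialized to filtering: each factor is a one-step predictive likelihood.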
Sensitivity analysis $\leftarrow$ Expectation-Maximization
Differentiable parametric densities:
$$M_n^\theta(x_{n-1}, dx_n) = p_n^\theta(x_{n-1}, x_n)\, \lambda_n^X(dx_n) \quad \text{and} \quad g_n^\theta(x, y)\, \lambda_n^Y(dy).$$
Parametric Feynman-Kac models: $(G_n, Q_n, \gamma_n, \eta_n, \ldots) \rightsquigarrow (G_n^\theta, Q_n^\theta, \gamma_n^\theta, \eta_n^\theta, \ldots)$, with
$$\frac{\partial}{\partial\theta} \log \gamma_n^\theta(1) = Q_n^\theta(F_n^\theta) \quad \text{and} \quad \frac{\partial}{\partial\theta}\, Q_n^\theta(\varphi_n) = Q_n^\theta\big( F_n^\theta\, [\varphi_n - Q_n^\theta(\varphi_n)] \big),$$
with the additive functional
$$F_n^\theta(x_0, \ldots, x_n) := \sum_{0 \le k \le n} f_k^\theta(x_{k-1}, x_k), \qquad f_k^\theta(x_{k-1}, x_k) := \frac{\partial}{\partial\theta} \log G_k^\theta(x_k) + \frac{\partial}{\partial\theta} \log p_k^\theta(x_{k-1}, x_k).$$
Expectation-Maximization $\leftarrow$ entropy minimization (an additive functional $F_n^{\theta,\theta'}$ without derivatives):
$$\mathcal{E}_n(\theta, \theta') := Q_n^\theta\big( \log (dQ_n^\theta / dQ_n^{\theta'}) \big) \rightsquigarrow Q_n^\theta(F_n^{\theta,\theta'}).$$
3 key observations
$$\eta_n(f_n) := \frac{\gamma_n(f_n)}{\gamma_n(1)}, \quad \text{with } \gamma_n(f_n) := \mathbb{E}\Big[ f_n(X_n) \prod_{0 \le p < n} G_p(X_p) \Big].$$
1. Path space models: taking $\mathbf{X}_n := (X_0, \ldots, X_n)$ and $\mathbf{G}_n(\mathbf{X}_n) := G_n(X_n)$ gives $\boldsymbol{\eta}_n = Q_n$.
2. A multiplicative formula:
$$Z_n = \mathbb{E}\Big[ \prod_{0 \le p < n} G_p(X_p) \Big] = \prod_{0 \le p < n} \eta_p(G_p).$$
Proof: $Z_n := \gamma_n(1) = \gamma_{n-1}(G_{n-1}) = \eta_{n-1}(G_{n-1})\, \gamma_{n-1}(1)$, and iterate.
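The multiplicative formula is easy to verify numerically on a finite state space, running the marginal recursion and accumulating the one-step normalizers side by side (the two-state model below is an assumed illustration):

```python
import numpy as np

# Toy two-state model (assumed): check Z_n = prod_{0<=p<n} eta_p(G_p).
M = np.array([[0.9, 0.1],
              [0.2, 0.8]])
G = np.array([1.0, 0.5])
eta = np.array([0.5, 0.5])           # eta_0
n = 4

gamma = eta.copy()                   # gamma_0 = eta_0
Z_prod = 1.0
for _ in range(n):
    Z_prod *= np.dot(eta, G)         # accumulate eta_p(G_p)
    gamma = (gamma * G) @ M          # gamma_{p+1} = (gamma_p * G) M
    eta = gamma / gamma.sum()        # eta_{p+1} = gamma_{p+1} / gamma_{p+1}(1)
Z_direct = gamma.sum()               # gamma_n(1) = E[ prod_{p<n} G_p(X_p) ]
```

This identity is what makes the normalizing constants estimable from the particle flow alone, one factor per time step.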
3 key observations (continued)
3. Backward decomposition: if the Markov transitions have densities,
$$M_n(x_{n-1}, dx_n) = P(X_n \in dx_n \mid X_{n-1} = x_{n-1}) = H_n(x_{n-1}, x_n)\, \lambda_n(dx_n),$$
then
$$Q_n(d(x_0, \ldots, x_n)) = \eta_n(dx_n)\, M_{n,\eta_{n-1}}(x_n, dx_{n-1}) \cdots M_{1,\eta_0}(x_1, dx_0)$$
with the backward Markov transitions
$$M_{n,\eta_{n-1}}(x_n, dx_{n-1}) \propto G_{n-1}(x_{n-1})\, H_n(x_{n-1}, x_n)\, \eta_{n-1}(dx_{n-1}).$$
Summary
1. Introduction, motivations
2. Nonlinear Markov models: McKean distribution models; mean field particle interpretations; the 4 types of particle approximation measures
3. Some convergence results
4. Additive functionals
Nonlinear Markov models
Flows of Feynman-Kac measures $\rightarrow$ a two-step correction/prediction model:
$$\eta_n \xrightarrow{\ \text{updating-correction}\ } \widehat{\eta}_n = \Psi_{G_n}(\eta_n) \xrightarrow{\ \text{prediction/Markov transport}\ } \eta_{n+1} = \widehat{\eta}_n M_{n+1}.$$
Selection $\leftarrow$ nonlinear transport formulae:
$$\Psi_{G_n}(\eta_n) = \eta_n S_{n,\eta_n}, \quad \text{with, for any } \epsilon_n = \epsilon_n(\eta_n) \in [0, 1] \text{ s.t. } \epsilon_n G_n \le 1,$$
$$S_{n,\eta_n}(x, \cdot) := \epsilon_n G_n(x)\, \delta_x + (1 - \epsilon_n G_n(x))\, \Psi_{G_n}(\eta_n)$$
$$\Rightarrow \quad \eta_{n+1} = \eta_n (S_{n,\eta_n} M_{n+1}) := \eta_n K_{n+1,\eta_n}.$$
Nonlinear Markov chains
$\eta_n = \mathrm{Law}(\overline{X}_n)$ $\Rightarrow$ perfect sampling algorithm.
Nonlinear transport formulae: $\eta_{n+1} = \eta_n K_{n+1,\eta_n}$, with the collection of Markov probability transitions $K_{n+1,\eta_n} = S_{n,\eta_n} M_{n+1}$.
Local transitions: $P(\overline{X}_n \in dx_n \mid \overline{X}_{n-1}) = K_{n,\eta_{n-1}}(\overline{X}_{n-1}, dx_n)$, with $\eta_{n-1} = \mathrm{Law}(\overline{X}_{n-1})$.
McKean measures (canonical process):
$$\mathbb{P}_n(d(x_0, \ldots, x_n)) = \eta_0(dx_0)\, K_{1,\eta_0}(x_0, dx_1) \cdots K_{n,\eta_{n-1}}(x_{n-1}, dx_n).$$
Mean field particle interpretations
Sampling problem $\rightarrow$ mean field particle interpretation: a Markov chain $\xi_n = (\xi_n^1, \ldots, \xi_n^N) \in E_n^N$ s.t.
$$\eta_n^N := \frac{1}{N} \sum_{1 \le i \le N} \delta_{\xi_n^i} \simeq \eta_n.$$
Approximated local transitions ($1 \le i \le N$):
$$\xi_{n-1}^i \rightsquigarrow \xi_n^i \sim K_{n,\eta_{n-1}^N}(\xi_{n-1}^i, dx_n).$$
Schematic picture: $\xi_n \in E_n^N \rightsquigarrow \xi_{n+1} \in E_{n+1}^N$, each particle $\xi_n^i$ moving to $\xi_{n+1}^i$ according to $K_{n+1,\eta_n^N}$.
Rationale:
$$\eta_n^N \xrightarrow[N\to\infty]{} \eta_n \ \Rightarrow\ K_{n+1,\eta_n^N} \xrightarrow[N\to\infty]{} K_{n+1,\eta_n} \ \Rightarrow\ \xi^i \simeq \text{i.i.d. copies of } \overline{X}.$$
Particle McKean measures:
$$\frac{1}{N} \sum_{i=1}^N \delta_{(\xi_0^i, \ldots, \xi_n^i)} \xrightarrow[N\to\infty]{} \mathrm{Law}(\overline{X}_0, \ldots, \overline{X}_n).$$
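The mean field interpretation above can be sketched as a short simulation. This is a minimal bootstrap-type sketch with $\epsilon_n = 0$ (pure multinomial selection) on an assumed two-state toy model, compared against the exact marginal computed by the vector recursion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-state model (assumed for illustration).
M = np.array([[0.9, 0.1],
              [0.2, 0.8]])
G = np.array([1.0, 0.5])
n, N = 5, 200_000

# Exact marginal eta_n via gamma_{p+1} = (gamma_p * G) M.
gamma = np.array([0.5, 0.5])
for _ in range(n):
    gamma = (gamma * G) @ M
eta_exact = gamma / gamma.sum()

# Mean field particle system: selection (epsilon_n = 0) then mutation.
xi = rng.choice(2, size=N, p=[0.5, 0.5])          # xi_0^i ~ eta_0
for _ in range(n):
    w = G[xi]
    xi = rng.choice(xi, size=N, p=w / w.sum())    # selection ~ Psi_G(eta_n^N)
    xi = (rng.random(N) < M[xi, 1]).astype(int)   # mutation ~ M(xi, .)
eta_N = np.bincount(xi, minlength=2) / N          # occupation measure eta_n^N
```

With $N = 2 \times 10^5$ particles the occupation measure matches the exact marginal to a few parts in a thousand, consistent with the $1/\sqrt{N}$ fluctuation rate stated later.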
Some key advantages
Mean field models = stochastic linearization/perturbation technique:
$$\eta_n^N = \eta_{n-1}^N K_{n,\eta_{n-1}^N} + \frac{1}{\sqrt{N}}\, W_n^N, \quad \text{with } W_n^N \Rightarrow W_n \text{ centered Gaussian fields}.$$
- $\eta_n = \eta_{n-1} K_{n,\eta_{n-1}}$ stable $\Rightarrow$ no propagation of local sampling errors $\Rightarrow$ uniform control w.r.t. the time horizon.
- No burn-in, no need to study the stability of MCMC models.
- Stochastic adaptive grid approximation.
- Nonlinear system $\rightarrow$ positive, beneficial interactions.
- Simple and natural sampling algorithm.
Feynman-Kac models $\Rightarrow$ genetic type stochastic algorithm: selection $S_{n,\eta_n^N}$ maps the population $(\xi_n^1, \ldots, \xi_n^N)$ to $(\widehat{\xi}_n^1, \ldots, \widehat{\xi}_n^N)$, then mutation $M_{n+1}$ produces $(\xi_{n+1}^1, \ldots, \xi_{n+1}^N)$.
Acceptance/rejection-selection [geometric type clocks]:
$$S_{n,\eta_n^N}(\xi_n^i, dx) := \epsilon_n G_n(\xi_n^i)\, \delta_{\xi_n^i}(dx) + \big(1 - \epsilon_n G_n(\xi_n^i)\big) \sum_{j=1}^N \frac{G_n(\xi_n^j)}{\sum_{k=1}^N G_n(\xi_n^k)}\, \delta_{\xi_n^j}(dx).$$
Example: $G_n = 1_A \Rightarrow G_n(\xi_n^i) = 1_A(\xi_n^i)$.
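One selection step can be sketched directly from the formula: each particle survives with probability $\epsilon_n G_n(\xi_n^i)$, otherwise it jumps to a particle drawn from the weighted empirical measure. The Gaussian cloud and indicator potential below are assumed illustration values:

```python
import numpy as np

rng = np.random.default_rng(1)

def select(xi, G_vals, eps, rng):
    """One acceptance/rejection-selection step S_{n, eta_n^N}:
    keep particle i with probability eps * G(xi^i); otherwise replace it by a
    particle drawn from the weighted empirical measure Psi_G(eta_n^N)."""
    N = len(xi)
    survive = rng.random(N) < eps * G_vals           # geometric-type clock
    jumps = rng.choice(N, size=N, p=G_vals / G_vals.sum())
    return np.where(survive, xi, xi[jumps])

# Illustration (assumed): G_n = 1_A with A = {x > 0}, so eps = 1 gives eps*G <= 1.
xi = rng.normal(size=10_000)
G_vals = (xi > 0).astype(float)
xi_sel = select(xi, G_vals, eps=1.0, rng=rng)
```

With an indicator potential and $\epsilon_n = 1$ this reduces to the familiar rule: particles in $A$ are kept, the others are redistributed uniformly over the survivors.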
Interacting/branching process $\Rightarrow$ 4 types of occupation measures ($N = 3$ in the pictures):
- Current population: $\frac{1}{N} \sum_{i=1}^N \delta_{\xi_n^i} \simeq \eta_n$, with $\xi_n^i$ the $i$-th individual at time $n$.
- Genealogical tree: $\frac{1}{N} \sum_{i=1}^N \delta_{(\xi_{0,n}^i, \xi_{1,n}^i, \ldots, \xi_{n,n}^i)} \simeq Q_n$, with $(\xi_{0,n}^i, \ldots, \xi_{n,n}^i)$ the $i$-th ancestral line.
- Unbiased particle normalizing constants: $Z_n^N := \prod_{0 \le p < n} \eta_p^N(G_p) \simeq \prod_{0 \le p < n} \eta_p(G_p) = Z_n$.
- Complete genealogical tree $\simeq$ McKean measures:
$$\frac{1}{N} \sum_{i=1}^N \delta_{(\xi_0^i, \xi_1^i, \ldots, \xi_n^i)} \simeq \eta_0(dx_0)\, K_{1,\eta_0}(x_0, dx_1) \cdots K_{n,\eta_{n-1}}(x_{n-1}, dx_n).$$
- Backward Feynman-Kac path measures [elementary matrix operations]:
$$Q_n^N(d(x_0, \ldots, x_n)) = \eta_n^N(dx_n)\, M_{n,\eta_{n-1}^N}(x_n, dx_{n-1}) \cdots M_{1,\eta_0^N}(x_1, dx_0)$$
$$\simeq\ \eta_n(dx_n)\, M_{n,\eta_{n-1}}(x_n, dx_{n-1}) \cdots M_{1,\eta_0}(x_1, dx_0) = Q_n(d(x_0, \ldots, x_n)).$$
Equivalent stochastic algorithms:
- Genetic and evolutionary type algorithms.
- Spatial branching models.
- Sequential Monte Carlo methods.
- Population Monte Carlo models.
- Diffusion Monte Carlo (DMC), Quantum Monte Carlo (QMC), ...
Some botanical names = application domain areas: particle filters, bootstrapping, selection, pruning-enrichment, reconfiguration, cloning, go with the winner, spawning, condensation, grouping, rejuvenation, harmony searches, biomimetics, splitting, [(meta)heuristics] ...
All covered (since 1996) by the Feynman-Kac mean field particle model.
Summary
1. Introduction, motivations
2. Nonlinear Markov models
3. Some convergence results: non asymptotic theorems; unnormalized models
4. Additive functionals
Asymptotic theorems (CLT, LDP, MDP) and results for fixed $(n, N)$; some examples:
- Empirical processes:
$$\sup_{n \ge 0}\, \sup_{N \ge 1}\, \sqrt{N}\, \mathbb{E}\big( \|\eta_n^N - \eta_n\|_{\mathcal{F}_n}^p \big)^{1/p} < \infty.$$
- Concentration inequalities, uniform w.r.t. time:
$$\sup_{n \ge 0} P\big( |\eta_n^N(f_n) - \eta_n(f_n)| > \epsilon \big) \le c\, \exp\big( -N\epsilon^2/(2\sigma^2) \big)$$
[+ Guionnet (IHP 01), + Ledoux (JTP 00), + Rio (AAP 10)].
- Propagation of chaos expansions (+ Patras, Rubenthaler (AAP 09)):
$$P_{n,q}^N := \mathrm{Law}(\xi_n^1, \ldots, \xi_n^q) = \eta_n^{\otimes q} + \frac{1}{N}\, \partial^1 P_{n,q} + \cdots + \frac{1}{N^k}\, \partial^k P_{n,q} + \frac{1}{N^{k+1}}\, \partial^{k+1} P_{n,q}^N,$$
with $\sup_{N \ge 1} \|\partial^{k+1} P_{n,q}^N\|_{\mathrm{tv}} < \infty$ and $\sup_{n \ge 0} \|\partial^1 P_{n,q}\|_{\mathrm{tv}} \le c\, q^2$.
Unnormalized models
Unbiased particle approximation measures:
$$\gamma_n^N(f_n) := \eta_n^N(f_n) \prod_{0 \le p < n} \eta_p^N(G_p).$$
Asymptotic theorems: fluctuations & deviations [+ A. Guionnet (AAP 99, SPA 98), + L. Miclo (SP 2000), + D. Dawson].
Non asymptotic theory: bias and variance estimates:
1. Taylor type expansions (+ Patras & Rubenthaler (AAP 09)):
$$\mathbb{E}\big( (\gamma_n^N)^{\otimes q}(F) \big) =: Q_{n,q}^N(F) = \gamma_n^{\otimes q}(F) + \sum_{1 \le k \le (q-1)(n+1)} \frac{1}{N^k}\, \partial^k Q_{n,q}(F).$$
2. Variance estimates (+ Cerou & Guyader (HAL-INRIA 08 & IHP 2010)):
$$\mathbb{E}\Big( \big[ \gamma_n^N(f_n) - \gamma_n(f_n) \big]^2 \Big) \le c\, \frac{n}{N}\, \gamma_n(1)^2.$$
Summary
1. Introduction, motivations
2. Nonlinear Markov models
3. Some convergence results
4. Additive functionals
Additive functionals (+ Doucet & Singh, HAL-INRIA 09 & M2AN 2010)
Path space models:
$$P_n := \mathrm{Law}(X_0, \ldots, X_n), \qquad dQ_n := \frac{1}{Z_n} \prod_{0 \le p < n} G_p(X_p)\, dP_n.$$
Hypothesis: $M_n(x_{n-1}, dx_n) = H_n(x_{n-1}, x_n)\, \lambda_n(dx_n)$, so that
$$Q_n(d(x_0, \ldots, x_n)) = \eta_n(dx_n)\, M_{n,\eta_{n-1}}(x_n, dx_{n-1}) \cdots M_{1,\eta_0}(x_1, dx_0)$$
with the backward transitions
$$M_{p+1,\eta}(x, dx') \propto G_p(x')\, H_{p+1}(x', x)\, \eta(dx').$$
Particle estimates $\leftarrow$ complete genealogical tree:
$$Q_n^N(d(x_0, \ldots, x_n)) = \eta_n^N(dx_n)\, M_{n,\eta_{n-1}^N}(x_n, dx_{n-1}) \cdots M_{1,\eta_0^N}(x_1, dx_0).$$
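The backward decomposition can be checked exactly on a finite state space. The sketch below (an assumed toy two-state model, with exact marginals $\eta_p$ in place of the particle ones $\eta_p^N$ to keep the check deterministic) evaluates $Q_n(F_n)$ for an additive functional via the backward transitions, and compares it with brute-force path enumeration:

```python
import numpy as np
from itertools import product

# Toy two-state model (assumed); H_n = M w.r.t. the counting measure.
M = np.array([[0.7, 0.3],
              [0.4, 0.6]])
G = np.array([1.0, 0.5])
eta0 = np.array([0.5, 0.5])
f = np.array([1.0, -1.0])            # additive functional F_n = sum_p f(x_p)
n = 3

# Forward marginals eta_0, ..., eta_n.
etas = [eta0]
gamma = eta0.copy()
for _ in range(n):
    gamma = (gamma * G) @ M
    etas.append(gamma / gamma.sum())

def backward(eta_p):
    """Backward matrix B[x, x'] = Q(X_p = x' | X_{p+1} = x),
    proportional to G(x') H(x', x) eta_p(x')."""
    B = (G * eta_p)[None, :] * M.T
    return B / B.sum(axis=1, keepdims=True)

# Q_n(F_n) from the time-p marginals eta_n B_{n-1} ... B_p of Q_n.
mu = etas[n]
qF = mu @ f
for p in range(n - 1, -1, -1):
    mu = mu @ backward(etas[p])
    qF += mu @ f

# Brute-force path-space expectation under Q_n.
num = den = 0.0
for path in product(range(2), repeat=n + 1):
    w = eta0[path[0]]
    for p in range(n):
        w *= G[path[p]] * M[path[p], path[p + 1]]
    num += w * sum(f[x] for x in path)
    den += w
```

Replacing each $\eta_p$ by the particle occupation measure $\eta_p^N$ in `backward` gives precisely the $O(N^2)$ matrix operations behind the particle estimate $Q_n^N$.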
2 types of path-space estimates
- Complete genealogical tree = McKean measures & FK path space:
$$\frac{1}{N} \sum_{i=1}^N \delta_{(\xi_0^i, \ldots, \xi_n^i)} \xrightarrow[N\to\infty]{} \mathrm{Law}(\overline{X}_0, \ldots, \overline{X}_n) \quad \& \quad Q_n^N \xrightarrow[N\to\infty]{} Q_n.$$
- Simple genealogical tree = FK path space:
$$\eta_n^N = \frac{1}{N} \sum_{i=1}^N \delta_{(\xi_{0,n}^i, \xi_{1,n}^i, \ldots, \xi_{n,n}^i)} \xrightarrow[N\to\infty]{} Q_n = \boldsymbol{\eta}_n.$$
Main problem: path degeneracy w.r.t. the time horizon (as for any genetic ancestral tree).
Roughly: uniform estimates $\leftrightarrow$ estimates linear w.r.t. the time horizon.
Some non asymptotic estimates
Additive functional:
$$F_n(x_0, \ldots, x_n) = \frac{1}{n} \sum_{0 \le p \le n} f_p(x_p).$$
Bias estimate + uniform $L_p$-bounds + variance:
$$N\, \mathbb{E}\Big( \big[ (Q_n^N - Q_n)(F_n) \big]^2 \Big) \le c\, \Big( 1/n + \underbrace{1/N}_{\text{bias term}} \Big).$$
Uniform exponential concentration:
$$\frac{1}{N} \log \sup_{n \ge 0} P\Big( \big| [Q_n^N - Q_n](F_n) \big| \ge b/N + \epsilon \Big) \le -\epsilon^2/(2b^2).$$
P. Del Moral (INRIA) INRIA Bordeaux-Sud Ouest
More informationWinter 2019 Math 106 Topics in Applied Mathematics. Lecture 1: Introduction
Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 1: Introduction 19 Winter M106 Class: MWF 12:50-1:55 pm @ 200
More informationInference in state-space models with multiple paths from conditional SMC
Inference in state-space models with multiple paths from conditional SMC Sinan Yıldırım (Sabancı) joint work with Christophe Andrieu (Bristol), Arnaud Doucet (Oxford) and Nicolas Chopin (ENSAE) September
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation
More informationMonte Carlo methods for sampling-based Stochastic Optimization
Monte Carlo methods for sampling-based Stochastic Optimization Gersende FORT LTCI CNRS & Telecom ParisTech Paris, France Joint works with B. Jourdain, T. Lelièvre, G. Stoltz from ENPC and E. Kuhn from
More informationPattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions
Pattern Recognition and Machine Learning Chapter 2: Probability Distributions Cécile Amblard Alex Kläser Jakob Verbeek October 11, 27 Probability Distributions: General Density Estimation: given a finite
More informationAsymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½
University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 1998 Asymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½ Lawrence D. Brown University
More informationComplexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning
Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning Christos Dimitrakakis Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
More informationOnline appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US
Online appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US Gerdie Everaert 1, Lorenzo Pozzi 2, and Ruben Schoonackers 3 1 Ghent University & SHERPPA 2 Erasmus
More informationUsing systematic sampling for approximating Feynman-Kac solutions by Monte Carlo methods
Using systematic sampling for approximating Feynman-Kac solutions by Monte Carlo methods Ivan Gentil and Bruno Rémillard Abstract While convergence properties of many sampling selection methods can be
More informationAnnealing Between Distributions by Averaging Moments
Annealing Between Distributions by Averaging Moments Chris J. Maddison Dept. of Comp. Sci. University of Toronto Roger Grosse CSAIL MIT Ruslan Salakhutdinov University of Toronto Partition Functions We
More informationCalibration of Stochastic Volatility Models using Particle Markov Chain Monte Carlo Methods
Calibration of Stochastic Volatility Models using Particle Markov Chain Monte Carlo Methods Jonas Hallgren 1 1 Department of Mathematics KTH Royal Institute of Technology Stockholm, Sweden BFS 2012 June
More informationA Conditional Approach to Modeling Multivariate Extremes
A Approach to ing Multivariate Extremes By Heffernan & Tawn Department of Statistics Purdue University s April 30, 2014 Outline s s Multivariate Extremes s A central aim of multivariate extremes is trying
More informationSequential Monte Carlo methods for system identification
Technical report arxiv:1503.06058v3 [stat.co] 10 Mar 2016 Sequential Monte Carlo methods for system identification Thomas B. Schön, Fredrik Lindsten, Johan Dahlin, Johan Wågberg, Christian A. Naesseth,
More informationStatistical Inference and Methods
Department of Mathematics Imperial College London d.stephens@imperial.ac.uk http://stats.ma.ic.ac.uk/ das01/ 31st January 2006 Part VI Session 6: Filtering and Time to Event Data Session 6: Filtering and
More informationMinimizing D(Q,P) def = Q(h)
Inference Lecture 20: Variational Metods Kevin Murpy 29 November 2004 Inference means computing P( i v), were are te idden variables v are te visible variables. For discrete (eg binary) idden nodes, exact
More informationAverage-cost temporal difference learning and adaptive control variates
Average-cost temporal difference learning and adaptive control variates Sean Meyn Department of ECE and the Coordinated Science Laboratory Joint work with S. Mannor, McGill V. Tadic, Sheffield S. Henderson,
More informationECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering
ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Siwei Guo: s9guo@eng.ucsd.edu Anwesan Pal:
More information