Inverse Langevin approach to time-series data analysis


Inverse Langevin approach to time-series data analysis
Fábio Macêdo Mendes, Anníbal Dias Figueiredo Neto
Universidade de Brasília
MaxEnt 2007, Saratoga Springs

Outline
1 Motivation
2 ...

Classic Brownian motion
Einstein: drunkard's walk (Smoluchowski controversy).
Brownian motions appear all around: physics, finance, communication, etc.
Random Gaussian increments (SDEs: Wiener, Itô): dx = m(x; t) dt + σ(x; t) dB

Langevin dynamics
We must retain Newtonian physics: expand the state.
dv = a(x, v; t) dt + D(x, v; t) dB
dx = v dt
Rich set of solutions from simple equations: linear, exponential, oscillatory, etc.
Similar results (classic Brownian motion).
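As a concrete illustration (not part of the talk), the pair of equations above can be integrated numerically with an Euler-Maruyama scheme. The damped-harmonic drift a(x, v) = a0 - γv - ω²x anticipates the force that appears in a later slide; all names and parameter values below are illustrative.

```python
import numpy as np

def simulate_langevin(x0, v0, gamma, omega2, a0, D, dt, n_steps, rng=None):
    """Euler-Maruyama integration of
        dv = (a0 - gamma*v - omega2*x) dt + sqrt(D) dB,   dx = v dt,
    where dB is a Gaussian increment of variance dt (Wiener noise)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    x[0], v[0] = x0, v0
    for i in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        v[i + 1] = v[i] + (a0 - gamma * v[i] - omega2 * x[i]) * dt + np.sqrt(D) * dB
        x[i + 1] = x[i] + v[i] * dt
    return x, v

# Oscillatory Brownian motion: no dissipation, harmonic force only.
x, v = simulate_langevin(x0=0.0, v0=0.0, gamma=0.0, omega2=0.1,
                         a0=0.0, D=1.0, dt=0.01, n_steps=10_000)
```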

Outline 1 Motivation 2

What we tackle (work in progress)
Wiener noise dB = β dt: Fokker-Planck / forward Kolmogorov equation (Brownian motion).
Non-analytic noises, Lévy and others: Kramers-Moyal equation (gas of rigid spheres).

Questions
What are the drift and the noise?
Is it Fokker-Planck or Kramers-Moyal?
Can we compare Fokker-Planck to Kramers-Moyal models?

Markov Chain

Use Bayes' theorem... Likelihood (µ: measurement noise, w: hidden state; η, θ: parameters of interest)
p(η | y) = p(η) p(y | η) / p(y)
p(y | η) = ∫ dw dµ dθ p(η, θ) p(µ) p(y | w, µ) p(w | η, θ)
Similar expression for p(θ | y).
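The talk evaluates this marginalisation analytically (next slides). Purely as an illustration of what the integral means, here is a crude Monte Carlo version: draw hidden paths from the prior and average the observation likelihood. The function names and interface are assumptions, not the authors' method.

```python
import numpy as np

def log_marginal_likelihood_mc(y, sample_hidden, log_obs_lik, n_samples=5000, rng=None):
    """Monte Carlo estimate of log p(y | eta) = log ∫ dw dθ p(θ) p(w | eta, θ) p(y | w).
    sample_hidden(rng) must return one hidden path w drawn from the prior
    (with θ sampled internally); log_obs_lik(y, w) returns log p(y | w),
    with the measurement noise µ assumed already folded in."""
    rng = np.random.default_rng() if rng is None else rng
    logs = np.array([log_obs_lik(y, sample_hidden(rng)) for _ in range(n_samples)])
    m = logs.max()                      # log-sum-exp for numerical stability
    return m + np.log(np.mean(np.exp(logs - m)))
```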

Exactly soluble propagators
A Markov process is characterized by its propagator:
p(w_0, w_1, ..., w_N) = p(w_0) p(w_1 | w_0) ... p(w_N | w_{N-1})
We know analytical (Gaussian!) results only for the simplest cases:
F(x, v; t) / m = -γ v - ω² x + a_0
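For one of those soluble cases, the velocity-only Ornstein-Uhlenbeck process dv = -γv dt + √D dB, the one-step propagator is a known Gaussian, and the Markov factorization above turns the joint density into a product of such Gaussians. A minimal sketch (simplified to velocity only; the talk's full position-velocity propagator is a correlated two-dimensional Gaussian):

```python
import numpy as np

def ou_loglik(v, gamma, D, dt):
    """Log-likelihood of a velocity series under the exact OU propagator
        p(v_{i+1} | v_i) = N(v_i * exp(-gamma*dt),
                             D * (1 - exp(-2*gamma*dt)) / (2*gamma)).
    The Markov property makes the total log-likelihood a sum over steps."""
    mean = v[:-1] * np.exp(-gamma * dt)
    var = D * (1.0 - np.exp(-2.0 * gamma * dt)) / (2.0 * gamma)
    resid = v[1:] - mean
    return -0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))
```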

The parabolic method
We don't know how to integrate every model.
Newton approximation: over a small δt the force is almost constant (but depends on the initial position + parameters).
We can then calculate the trajectory/propagator for that step.
Repeat this procedure for the next step.
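A minimal sketch of the deterministic part of one such step, assuming the force is frozen at its value at the start of the interval (the noise then turns this parabola into the mean of a Gaussian propagator); the names force, theta and parabolic_step are illustrative:

```python
def parabolic_step(x, v, force, theta, dt):
    """Advance (x, v) over a small interval dt with the acceleration held
    constant at its initial value, so the trajectory is a parabola in time.
    force(x, v, theta) is the user-supplied drift a(x, v; theta)."""
    a = force(x, v, theta)             # frozen acceleration over [t, t + dt]
    x_new = x + v * dt + 0.5 * a * dt**2
    v_new = v + a * dt
    return x_new, v_new
```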

Delta function approximation
Conjugate prior for p(µ). At some level of approximation, for large data sets...
∫ dµ p(µ) p(y | w, µ) ≈ δ(y - x)
Now we have only Gaussian integrations over velocities:
Φ = ∫ dv p(y_0, v_0, y_1, v_1, ..., y_N, v_N, θ, η)

Main loop
At each integration step, collect a bunch of coefficients:
η^{α_i} G_i exp[-(η/2)(a_i v_i² + 2 b_i v_i + 2 c_i v_i v_{i-1} + d_i v_{i-1}² + 2 e_i v_{i-1} + f_i)]
After each step, some coefficients must be updated before we start the next integration:
Φ = ∫ dv p(y, v | θ, η) = η^{(N-1)/2} G(θ) exp[-(η/2) f(θ, y)]
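The coefficient update amounts to completing the square in the previous velocity and integrating it out. A sketch of that single Gaussian integral, using the coefficient names from the slide (the bookkeeping that merges the result with the next step's coefficients is omitted):

```python
import numpy as np

def marginalise_previous_velocity(a, b, c, d, e, f, eta):
    """Integrate exp(-eta/2 * (a*v**2 + 2*b*v + 2*c*v*u + d*u**2 + 2*e*u + f))
    over the previous velocity u.  Completing the square in u gives a
    prefactor sqrt(2*pi / (eta*d)) and an updated quadratic form in v:
        a' = a - c**2 / d,   b' = b - c*e / d,   f' = f - e**2 / d."""
    log_prefactor = 0.5 * np.log(2.0 * np.pi / (eta * d))
    return a - c**2 / d, b - c * e / d, f - e**2 / d, log_prefactor
```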

Inference results
Use a Laplace approximation to normalize p(θ | y, η); plug in a Gamma prior p(η).
p(θ | y) ∝ p(θ) G(θ) [ f̄(y) + f(θ, y) - f(θ̄, y) + δ ]^{-[(N+1)/2 + σ + d/2]}
p(η | y) is calculated analytically.
We use a flat prior p(θ) (I'm ashamed!)
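Since Φ depends on η only through η^{(N-1)/2} exp[-(η/2) f(θ, y)], a Gamma prior on η is conjugate, which is why p(η | y) comes out analytically. A minimal sketch of that update, with sigma0 and delta0 standing in for the Gamma hyperparameters (the extra Laplace-correction exponents appearing on the slide are not included):

```python
def eta_posterior(f_value, n, sigma0, delta0):
    """Conjugate Gamma update for the noise precision eta implied by a
    likelihood ∝ eta**((n - 1) / 2) * exp(-eta * f_value / 2) and a
    Gamma(shape=sigma0, rate=delta0) prior.  Returns (shape, rate);
    the posterior mean of eta is shape / rate."""
    shape = sigma0 + (n - 1) / 2.0
    rate = delta0 + f_value / 2.0
    return shape, rate
```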

Oscillatory Brownian motion
[Figure: inferred force coefficients (constant, linear, quadratic) vs. data length, 10³ to 10⁵ points]

Comments
Works pretty well with artificially generated time series.
Still, it does not get the diffusion coefficient quite right in some cases.
It is also somewhat robust to the existence of dissipative forces.

Some results (true parameter values in parentheses in the header; uncertainties in parentheses after each estimate)

              η (1)          θ₁ (0)         θ₂ (0.1)
Simple        0.96 (0.03)    0.07 (0.02)    9×10⁻⁷ (5×10⁻⁴)
Harmonic      0.78 (0.03)    0.04 (0.08)    0.099 (0.003)
Dissipative   1.06 (0.03)    0.08 (0.02)    5×10⁻⁴ (9×10⁻⁴)
D-H           1.07 (0.03)    0.04 (0.08)    0.096 (0.004)

You will not win any money!
There is no force. This will NOT make you win money!
But this is the expected behavior, of course.
Things to explore (speculation-crash patterns):
the noise may not be Wiener
time-varying volatility
we can introduce trends as a time-dependent force

The time series data
[Figure: log2-return of GE closing values by year, 1970 to the 2000s]

Given f(x, θ), it is possible to infer θ and the intensity of the noise which governs a Langevin system.
Open possibilities: better linear approximations, time-dependent parameters, a linear diffusion coefficient.
Open issues:
Work out a decent prior p(θ) (and calculate the evidence!)
The treatment of noise is not really satisfactory.
Dissipative forces (easy) and non-Wiener and Lévy noises (potentially very hard).