
Mathematical Methods for Neurosciences
ENS - Master MVA / Paris 6 - Master Maths-Bio (2014-2015)
Etienne Tanré - Olivier Faugeras, INRIA - Team Tosca
November 26th, 2014

Outline
1 Motivation
2 Complements
3 Stochastic Differential Equations
4 Link between SDE and PDE
5 Approximation of Solutions
6 Noisy Integrate and Fire Models
7 Complement: Point Poisson Processes


Motivation: first models of individual neurons
- Simple Integrate-and-Fire (SIF) Model
- Leaky Integrate-and-Fire (LIF) Model

A first model of single neuron
The integrate-and-fire neuronal model was introduced by Lapicque in 1907. The membrane equation is
C \frac{dV(t)}{dt} = I_L(V) + I_{syn}(V, t) + I_{ext}(t),
where
- I_L(V) is the leak current,
- I_{syn}(V, t) is the synaptic current,
- I_{ext}(t) is the external current,
- C is the membrane capacitance.
A spike response is generated whenever the membrane potential reaches a fixed threshold V_{th}. After the spike, V is reset to a fixed value V_{reset}.

Two simple examples
Simple integrate-and-fire model: the simplest form, in which the IF neuron has no leak current, i.e. I_L(V) = 0.
Leaky integrate-and-fire model: I_L(V) = -g_L (V(t) - E_L), where
- g_L is the leak conductance,
- E_L is the resting potential.

Synaptic current
First simple model of synaptic transmission:
- an instantaneous rise or fall of the synaptic current at the arrival of a presynaptic spike;
- the postsynaptic current (PSC) is described by a delta function with amplitude equal to the efficacy J.
The total synaptic current stemming from N_{syn} synaptic input channels takes the form
I_{syn}(V, t) = I_{syn}(t) = \tau_m J \sum_{i=1}^{N_{syn}} \sum_k \delta(t - t_i^k), \qquad \tau_m = \frac{C}{g_L}.
Asymptotic behaviour
- Assume the neuron receives a high barrage of Poissonian distributed and uncorrelated synaptic inputs.
- Assume the amplitude J is small, i.e. J \ll V_{th} - E_L.

Asymptotic behaviour
Current:
I_{syn}(t)\,dt \approx \mu\,dt + \sigma \sqrt{\tau_m}\,dW_t,
where
\mu = \tau_m J N_{syn} \nu_{syn}, \qquad \sigma^2 = \tau_m J^2 N_{syn} \nu_{syn},
and \nu_{syn} is the mean activation rate of each synapse.
A continuous-time limit equation:
\tau_m dV(t) = f(V(t))\,dt + I_{ext}(t)\,dt + \mu\,dt + \sigma \sqrt{\tau_m}\,dW_t.

Simple examples
f(V) = 0, I_{ext} = 0:
\tau_m dV(t) = \mu\,dt + \sigma \sqrt{\tau_m}\,dW_t.
The potential V evolves as a Brownian motion with constant drift.
f(V) = -V, I_{ext} = 0:
\tau_m dV(t) = (\mu - V(t))\,dt + \sigma \sqrt{\tau_m}\,dW_t.
The potential V evolves as an Ornstein-Uhlenbeck process.
Spiking times
Recall that the considered neuron emits a spike at each time \tau at which its potential hits the threshold V_{th}. From a mathematical viewpoint, the spiking times are first hitting times of a constant threshold by a stochastic process:
\tau = \inf\{t > 0 : V(t) \ge V_{th}\}.
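These dynamics are straightforward to simulate. Below is a minimal sketch (not part of the lecture; all parameter values are illustrative) that integrates the Ornstein-Uhlenbeck case with an Euler scheme and applies the threshold-and-reset rule, so the recorded spike times are discrete approximations of the first hitting times \tau.

```python
import numpy as np

# Minimal Euler simulation of the noisy LIF model
#   tau_m dV = (mu - V) dt + sigma * sqrt(tau_m) dW,
# with reset rule: when V >= V_th, record a spike and reset V to V_reset.
# Parameter values are illustrative, not taken from the slides.
rng = np.random.default_rng(0)

tau_m, mu, sigma = 0.02, 1.2, 0.5      # arbitrary units
V_th, V_reset = 1.0, 0.0
dt, T = 1e-4, 2.0

V = V_reset
spike_times = []
for k in range(int(T / dt)):
    dW = np.sqrt(dt) * rng.standard_normal()
    V += ((mu - V) / tau_m) * dt + (sigma / np.sqrt(tau_m)) * dW
    if V >= V_th:                      # approximate first hitting time of the threshold
        spike_times.append((k + 1) * dt)
        V = V_reset

print(f"{len(spike_times)} spikes in {T} s of simulated time")
```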


Levy's Characterization of Brownian Motion
Theorem
A stochastic process X = (X_t)_{t \ge 0} is a standard Brownian motion if and only if it is a continuous local martingale with X_0 = 0 and [X]_t = t.
Theorem (Multi-dimensional Version)
Let X = (X_t^1, \dots, X_t^n)_{t \ge 0} be continuous local martingales, vanishing at 0, such that [X^i, X^j]_t = t\,\delta_{i,j}. Then X is a standard n-dimensional Brownian motion.
Remark
The conditions in the previous theorems are obviously satisfied by Brownian motion; the content of the theorems is that they actually characterize it.

Itô Isometry
Proposition
Let H be an adapted process such that
\int_0^T E(H_s^2)\,ds < \infty.
Then
E\left[\left(\int_0^T H_s\,dW_s\right)^2\right] = \int_0^T E(H_s^2)\,ds.
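A quick Monte Carlo sanity check of the isometry (not from the slides): for H_s = W_s one has \int_0^T E(W_s^2)\,ds = T^2/2, which the simulated Itô integral should reproduce.

```python
import numpy as np

# Monte Carlo check of the Ito isometry for H_s = W_s:
# E[(int_0^T W_s dW_s)^2] should equal int_0^T E[W_s^2] ds = T^2 / 2.
rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 2000, 20000
dt = T / n_steps

dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
W_left = np.cumsum(dW, axis=1) - dW       # W at the left endpoint of each increment
ito_integral = np.sum(W_left * dW, axis=1)  # forward (Ito) Riemann sums

print("E[(int W dW)^2] ~", np.mean(ito_integral**2), " expected:", T**2 / 2)
```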

Change of time
Theorem (Dubins-Schwarz)
Let M be a continuous local martingale such that M_0 = 0 a.s. and [M]_\infty = \infty. Set
\tau_s = \inf\{t \ge 0 : [M]_t \ge s\}.
Then B_s = M_{\tau_s} is an F_{\tau_s}-Brownian motion, and M_t = B_{[M]_t}.
Remark
The Dubins-Schwarz theorem shows that continuous local martingales with unbounded (as time goes to infinity) quadratic variation are Brownian motions, up to a (stochastic) change of time.


Stochastic Differential Equations
dX_s = b(s, X_s)\,ds + \sigma(s, X_s)\,dW_s,
where b and \sigma are predictable.
Solution
X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s.

Strong Solutions
A strong solution of the SDE on the given probability space (\Omega, F, F, P), with respect to the fixed F-Brownian motion (W_t)_{t \ge 0} and the initial condition \xi, is a process (X_t)_{t \ge 0} with continuous sample paths and with the following properties:
- X is adapted to the filtration F;
- P(X_0 = \xi) = 1;
- P\left(\int_0^t |b(s, X_s)| + \sigma^2(s, X_s)\,ds < \infty\right) = 1 for every t \ge 0;
- X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s holds almost surely for every t \ge 0.

Weak Solutions
A weak solution of the SDE is a triple ((X, W), (\Omega, F, P), F), where
- (\Omega, F, P) is a probability space and F is a filtration of sub-\sigma-fields of F satisfying the usual conditions;
- X is a continuous, F-adapted stochastic process;
- W is an F-Brownian motion;
- P\left(\int_0^t |b(s, X_s)| + \sigma^2(s, X_s)\,ds < \infty\right) = 1 for every t \ge 0;
- X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s holds almost surely for every t \ge 0.

Strong Existence and Uniqueness
Theorem
Let T > 0 and let b(\cdot, \cdot) : [0, T] \times R^n \to R^n and \sigma(\cdot, \cdot) : [0, T] \times R^n \to R^{n \times m} be measurable functions satisfying
|b(t, x)| + |\sigma(t, x)| \le C (1 + |x|), \quad x \in R^n,\ t \in [0, T],
for some constant C, and such that
|b(t, x) - b(t, y)| + |\sigma(t, x) - \sigma(t, y)| \le D |x - y|, \quad x, y \in R^n,\ t \in [0, T].
Then the stochastic differential equation
dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dB_t, with initial condition X_0,
has a unique solution X, continuous in time, adapted to the filtration generated by X_0 and the Brownian motion, and such that
E\left[\int_0^T |X_t|^2\,dt\right] < \infty.

Main Ideas of the Proof
- Uniqueness is obtained thanks to Gronwall's lemma.
- Existence: Picard iteration scheme.
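The Picard scheme can also be run numerically, which gives a feeling for the construction. Below is a sketch (not the lecture's argument; the SDE and its coefficients are chosen arbitrarily) that iterates X^{(n+1)}_t = X_0 + \int_0^t b(X^{(n)}_s)\,ds + \int_0^t \sigma(X^{(n)}_s)\,dW_s on one fixed, discretized Brownian path and prints the distance between successive iterates.

```python
import numpy as np

# Numerical Picard iteration on a fixed Brownian path, for the arbitrary SDE
#   dX_t = -X_t dt + 0.3 dW_t,   X_0 = 1.
rng = np.random.default_rng(2)
b = lambda x: -x
sigma = lambda x: 0.3 * np.ones_like(x)

T, n_steps, X0 = 1.0, 1000, 1.0
dt = T / n_steps
dW = np.sqrt(dt) * rng.standard_normal(n_steps)

X = np.full(n_steps + 1, X0)             # iterate 0: the constant path
for n in range(6):
    drift = np.concatenate(([0.0], np.cumsum(b(X[:-1]) * dt)))   # int b(X^n_s) ds
    noise = np.concatenate(([0.0], np.cumsum(sigma(X[:-1]) * dW)))  # int sigma(X^n_s) dW_s
    X_new = X0 + drift + noise
    print(f"iteration {n + 1}: sup |X_new - X_old| = {np.max(np.abs(X_new - X)):.2e}")
    X = X_new
```

The successive distances shrink geometrically, which is exactly the contraction estimate that the Gronwall argument formalizes.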

Infinitesimal Generator Associated to a Feller Process
Let X be a Feller process. A function f in C_0 is said to belong to the domain D_A of the infinitesimal generator of X if the limit
A f(x) = \lim_{t \to 0} \frac{E_x(f(X_t)) - f(x)}{t}
exists in C_0. The operator A : D_A \to C_0 is called the infinitesimal generator of the process X.
Example (Diffusion Processes)
The infinitesimal generator associated to the solution of a stochastic differential equation dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t writes
A f(x) = \sum_i b_i(x) \frac{\partial f}{\partial x_i}(x) + \frac{1}{2} \sum_{i,j} a_{ij}(x) \frac{\partial^2 f}{\partial x_i \partial x_j}(x),
with a_{ij}(x) = \sum_{k=1}^r \sigma_{ik}(x) \sigma_{jk}(x).
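As a concrete one-dimensional illustration (not on the slide), here is the generator of the Ornstein-Uhlenbeck process that appears later in the lecture.

```latex
% For the Ornstein-Uhlenbeck SDE  dV_t = \lambda(\bar V - V_t)\,dt + \sigma\,dW_t,
% we have b(v) = \lambda(\bar V - v) and a(v) = \sigma^2, so the generator is
\[
  A f(v) \;=\; \lambda(\bar V - v)\, f'(v) \;+\; \tfrac{\sigma^2}{2}\, f''(v),
  \qquad f \in C_0^2(\mathbb{R}).
\]
```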


Kolmogorov's backward equation
Theorem
Let f \in C_0^2(R^n). Define
u(t, x) = E_x[f(X_t)].
Then u(t, \cdot) \in D_A for each t, and
\frac{\partial u}{\partial t} = A u, \quad t > 0,\ x \in R^n,
u(0, x) = f(x), \quad x \in R^n,
where the right-hand side is to be interpreted as A applied to the function x \mapsto u(t, x).

Feynman-Kac formula
Theorem
Let f \in C_0^2(R^n) and q \in C(R^n). Assume that q is bounded from below. Put
v(t, x) = E_x\left[\exp\left(-\int_0^t q(X_s)\,ds\right) f(X_t)\right].
Then
\frac{\partial v}{\partial t} = A v - q v, \quad t > 0,\ x \in R^n,
v(0, x) = f(x), \quad x \in R^n.
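The probabilistic representation suggests a direct Monte Carlo estimator of v(t, x): simulate paths of X, accumulate \int_0^t q(X_s)\,ds along each path, and average. A sketch with arbitrary choices of b, \sigma, q and f (not from the slides):

```python
import numpy as np

# Monte Carlo estimate of v(t, x) = E_x[ exp(-int_0^t q(X_s) ds) f(X_t) ]
# for dX_t = -X_t dt + dW_t, q(x) = x^2, f(x) = cos(x), via an Euler scheme.
rng = np.random.default_rng(3)
b = lambda x: -x
sig = 1.0
q = lambda x: x**2
f = lambda x: np.cos(x)

t, x0, n_steps, n_paths = 1.0, 0.5, 500, 50000
dt = t / n_steps

X = np.full(n_paths, x0)
integral_q = np.zeros(n_paths)
for _ in range(n_steps):
    integral_q += q(X) * dt                      # left-point rule for int q(X_s) ds
    X += b(X) * dt + sig * np.sqrt(dt) * rng.standard_normal(n_paths)

print("v(t, x0) ~", np.mean(np.exp(-integral_q) * f(X)))
```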

Fokker-Planck Equation
Theorem
Let X be an Itô diffusion in R^n, solution of the stochastic differential equation
dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t.
Assume that P_x(X_t \in dy) = \Gamma(t, x, y)\,dy for all x \in R^n, t > 0, and that y \mapsto \Gamma(t, x, y) is smooth for each t and x. Then \Gamma satisfies the Kolmogorov forward equation (also known as the Fokker-Planck equation)
\frac{d}{dt} \Gamma(t, x, y) = A^* \Gamma(t, x, y),
where A^* is the adjoint of the operator A:
A^* \phi(y) = -\sum_i \frac{\partial}{\partial y_i}\left(b_i(y) \phi(y)\right) + \frac{1}{2} \sum_{i,j} \frac{\partial^2}{\partial y_i \partial y_j}\left[a_{ij}(y) \phi(y)\right].
Here a = \sigma \sigma^T, that is a_{ij}(x) = \sum_{k=1}^r \sigma_{ik}(x) \sigma_{jk}(x).
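A quick one-dimensional check (not on the slide) that also answers part of the Ornstein-Uhlenbeck exercise later on: a density killed by A^* is a stationary measure of the diffusion.

```latex
% For the Ornstein-Uhlenbeck process  dV_t = \lambda(\bar V - V_t)\,dt + \sigma\,dW_t,
% the adjoint reads
\[
  A^*\phi(y) \;=\; -\,\partial_y\!\bigl[\lambda(\bar V - y)\,\phi(y)\bigr]
                 \;+\; \tfrac{\sigma^2}{2}\,\partial_y^2 \phi(y),
\]
% and the Gaussian density with mean \bar V and variance \sigma^2/(2\lambda),
\[
  \phi_\infty(y) \;=\; \sqrt{\tfrac{\lambda}{\pi\sigma^2}}\,
        \exp\!\Bigl(-\tfrac{\lambda\,(y-\bar V)^2}{\sigma^2}\Bigr),
\]
% satisfies A^*\phi_\infty = 0 (the probability flux
% \lambda(\bar V - y)\phi_\infty - \tfrac{\sigma^2}{2}\phi_\infty' vanishes),
% so it is the stationary measure of the diffusion.
```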

Backward Kolmogorov Equation
Theorem
Let X be a solution of the stochastic differential equation dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t. Denote by \Gamma(t, x, y) the transition density of X, i.e. P(X_t \in dy \mid X_0 = x) = \Gamma(t, x, y)\,dy. Then
\frac{\partial \Gamma}{\partial t}(t, x, y) = b(x) \frac{\partial \Gamma}{\partial x}(t, x, y) + \frac{1}{2} a(x) \frac{\partial^2 \Gamma}{\partial x^2}(t, x, y) = A \Gamma(t, x, y),
where the infinitesimal operator A acts here on the variable x and a = \sigma \sigma^T.


Approximation of Solutions
Euler scheme
\bar X_0^\delta = X_0,
\bar X_{(k+1)\delta}^\delta = \bar X_{k\delta}^\delta + b(\bar X_{k\delta}^\delta)\,\delta + \sigma(\bar X_{k\delta}^\delta)\,(W_{(k+1)\delta} - W_{k\delta}).
Theorem
- The numerical scheme is strongly convergent: \lim_{\delta \to 0} E\left(|X_T - \bar X_T^\delta|\right) = 0.
- The numerical scheme is weakly convergent: \lim_{\delta \to 0} \left|E\,g(X_T) - E\,g(\bar X_T^\delta)\right| = 0.
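The strong error is easy to visualize on an SDE whose solution is known in closed form. The sketch below (illustrative example and parameters, not from the slides) uses geometric Brownian motion, whose exact solution X_T = X_0 exp((mu - sigma^2/2) T + sigma W_T) depends only on W_T, so scheme and solution can be compared on the same Brownian increments.

```python
import numpy as np

# Strong error of the Euler scheme on geometric Brownian motion
#   dX_t = mu X_t dt + sigma X_t dW_t.
rng = np.random.default_rng(4)
mu, sigma, X0, T, n_paths = 0.5, 1.0, 1.0, 1.0, 20000

for n_steps in (10, 40, 160):
    delta = T / n_steps
    dW = np.sqrt(delta) * rng.standard_normal((n_paths, n_steps))
    X_euler = np.full(n_paths, X0)
    for k in range(n_steps):
        X_euler = X_euler + mu * X_euler * delta + sigma * X_euler * dW[:, k]
    X_exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))
    print(f"delta = {delta:.5f}   E|X_T - X^delta_T| ~ {np.mean(np.abs(X_exact - X_euler)):.4f}")
```

Dividing delta by 4 should roughly halve the error, consistent with the strong order 1/2 stated on the next slide.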

Rate of convergence
Theorem
Assume b and \sigma are C^4 functions with bounded derivatives. Then
E\left(|X_T - \bar X_T^\delta|\right) \le C_T\,\delta^{1/2},
\left|E\,g(X_T) - E\,g(\bar X_T^\delta)\right| \le C_T\,\delta.
Remark: Romberg extrapolation
Assume we have an expansion of the error
E\,g(X_T) - E\,g(\bar X_T^\delta) = C_T^1\,\delta + C_T^2\,\delta^2 + O(\delta^3).
Then a well-chosen combination of \bar X^\delta and \bar X^{\delta/2} gives an order-2 scheme.
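The combination can be made explicit; this is the standard Richardson-Romberg computation (not spelled out on the slide).

```latex
% Writing the expansion for step sizes \delta and \delta/2 and eliminating the
% first-order term gives
\[
  2\,E\,g\bigl(\bar X_T^{\delta/2}\bigr) \;-\; E\,g\bigl(\bar X_T^{\delta}\bigr)
  \;=\; E\,g(X_T) \;+\; \tfrac{1}{2}\,C_T^2\,\delta^2 \;+\; O(\delta^3),
\]
% so the combined estimator has weak order 2.
```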


Simple Integrate-and-Fire Model
Special case I_{ext} = 0:
dV_t = \sigma\,dW_t.
Exercise: Law of the first hitting time
Compute the law of
\tau_a := \inf\{t > 0 : W_t \ge a\}.
P(\tau_a \le t) = P\left(\sup_{0 \le s \le t} W_s \ge a\right)
= P\left(\sup_{0 \le s \le t} W_s \ge a,\ W_t \ge a\right) + P\left(\sup_{0 \le s \le t} W_s \ge a,\ W_t < a\right)
= P(W_t \ge a) + P\left(\sup_{0 \le s \le t} W_s \ge a,\ W_t - W_{\tau_a} < 0\right)
= P(W_t \ge a) + P\left(\sup_{0 \le s \le t} W_s \ge a,\ W_t - W_{\tau_a} > 0\right)  (the increment after \tau_a is symmetric, by the strong Markov property)
= 2\,P(W_t \ge a).
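Since 2 P(W_t \ge a) = 2(1 - \Phi(a/\sqrt{t})), the identity is easy to check numerically. A sketch (not from the slides; the discrete-time maximum slightly underestimates the continuous one, so a small bias is expected):

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo check of P(tau_a <= t) = 2 P(W_t >= a) = 2 (1 - Phi(a / sqrt(t))).
rng = np.random.default_rng(5)
a, t, n_steps, n_paths = 1.0, 1.0, 1000, 10000
dt = t / n_steps

dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
path_max = np.max(np.cumsum(dW, axis=1), axis=1)   # running maximum of each discrete path

print("Monte Carlo  P(tau_a <= t) ~", np.mean(path_max >= a))
print("Closed form  2(1 - Phi(a/sqrt(t))) =", 2 * (1 - norm.cdf(a / np.sqrt(t))))
```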

Ornstein-Uhlenbeck Process
dV_t = \lambda(\bar V - V_t)\,dt + \sigma\,dW_t, \qquad V_0 = v.
Exercise
- Solve the equation explicitly.
- Give the law of V_t (the conditional law given V_0).
- Make explicit the associated stationary measure.
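For reference, the standard computation (the slide leaves it as an exercise) gives the Gaussian transition law V_t | V_0 = v ~ N(\bar V + (v - \bar V) e^{-\lambda t}, \sigma^2 (1 - e^{-2\lambda t}) / (2\lambda)) and the stationary measure N(\bar V, \sigma^2 / (2\lambda)). The sketch below (illustrative parameters) samples V_t exactly from this transition law, with no time discretization.

```python
import numpy as np

# Exact sampling of the OU process at time t, using its Gaussian transition law:
#   V_t | V_0 = v  ~  N( Vbar + (v - Vbar) e^{-lam t},  sigma^2 (1 - e^{-2 lam t}) / (2 lam) ).
rng = np.random.default_rng(6)
lam, Vbar, sigma, v0, t = 2.0, 0.8, 0.5, 0.0, 1.5   # illustrative parameters

n_samples = 100000
mean_t = Vbar + (v0 - Vbar) * np.exp(-lam * t)
var_t = sigma**2 * (1.0 - np.exp(-2.0 * lam * t)) / (2.0 * lam)
V_t = mean_t + np.sqrt(var_t) * rng.standard_normal(n_samples)

print("empirical mean / var :", V_t.mean(), V_t.var())
print("stationary mean / var:", Vbar, sigma**2 / (2 * lam))
```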


Point Poisson Process (P.P.P.)
Let us consider a set D (e.g. [0, T] \times [0, K]). A realisation of a P.P.P. on D with intensity I(t, x) is a set of points of D. For every subset F of D, we denote by N_F the number of points of the P.P.P. which are in F.
Characterization of a Point Poisson Process
- For all F \subset D, N_F is a random variable (with values in N) with Poisson law of parameter \int_F I(t, x)\,dt\,dx.
- For all subsets F and G with empty intersection, N_F and N_G are independent random variables.
A particular case: the intensity I is identically equal to 1. Then, for every subset F of D, the number N_F of points of the P.P.P. in F is a Poisson random variable with parameter equal to the volume of F.
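The unit-intensity case is also the easiest to simulate: draw a Poisson number of points with parameter the volume of D, then place them uniformly and independently on D. A minimal sketch (not from the slides):

```python
import numpy as np

# Simulate a Point Poisson Process with intensity 1 on D = [0, T] x [0, K].
rng = np.random.default_rng(7)
T, K = 2.0, 3.0

n_points = rng.poisson(T * K)                 # N_D ~ Poisson(volume of D)
points_t = rng.uniform(0.0, T, size=n_points)
points_x = rng.uniform(0.0, K, size=n_points)

# Defining property on a sub-rectangle F = [0, T/2] x [0, K/3]:
n_in_F = np.sum((points_t <= T / 2) & (points_x <= K / 3))
print(n_points, "points in D;", n_in_F, "in F (expected mean:", (T / 2) * (K / 3), ")")
```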

Point Poisson Process
Recall that a Poisson random variable X of parameter \lambda has the law
P(X = k) = \exp(-\lambda) \frac{\lambda^k}{k!}.
In particular, P(X = 0) = \exp(-\lambda).
Let us consider a process (X_t;\ 0 \le t \le T) and a non-negative function \phi, bounded from above by K. In order to simulate an event with probability \exp(-\int_0^T \phi(X_s)\,ds), we can use Point Poisson Processes. Indeed, the probability that the P.P.P. on [0, T] \times [0, K] has no point below the curve t \mapsto \phi(X_t) is precisely \exp(-\int_0^T \phi(X_s)\,ds).
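A sketch of this trick (illustrative choices of \phi and of the path, not from the slides): given a frozen path (X_t), draw a unit-intensity P.P.P. on [0, T] \times [0, K] and declare the event realized if no point falls below the curve t \mapsto \phi(X_t).

```python
import numpy as np

# Simulate an event of probability exp(-int_0^T phi(X_s) ds) by rejection against
# a unit-intensity Poisson point process on [0, T] x [0, K].
rng = np.random.default_rng(8)
T, K = 1.0, 2.0
phi = lambda x: np.minimum(x**2, K)           # non-negative, bounded by K (arbitrary choice)

# A frozen path X_t (here: a Brownian path on a fine grid, purely for illustration).
n_grid = 10000
grid = np.linspace(0.0, T, n_grid + 1)
X = np.concatenate(([0.0], np.cumsum(np.sqrt(T / n_grid) * rng.standard_normal(n_grid))))

n_points = rng.poisson(T * K)
pts_t = rng.uniform(0.0, T, size=n_points)
pts_y = rng.uniform(0.0, K, size=n_points)
X_at_pts = np.interp(pts_t, grid, X)          # path value at each point's time coordinate
event = not np.any(pts_y < phi(X_at_pts))     # True iff no point lies below the curve

print("event occurred:", event)
```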