Fokker-Planck Equation on a Graph with Finite Vertices
January 13, 2011
Joint work with S.-N. Chow (Georgia Tech), Wen Huang (USTC), Hao-Min Zhou (Georgia Tech)
Functional Inequalities and Discrete Spaces
Outline
1. Introduction
2. Theorem 1 - discrete Otto calculus
3. Theorem 2 - interpreting noise
4. Geometry of the metric space
5. Some remarks
6. Fokker-Planck equation on a graph
Preliminary
State space: X; measure on the state space X: μ.
Continuous state space: X is a manifold of dimension N, dμ is absolutely continuous with respect to the Lebesgue measure, with density function ρ such that ∫ ρ dx = 1.
Discrete state space: X is a finite set {a_1, …, a_N} and μ is a combination of δ-functions; {ρ_i}_{i=1}^N = {ρ(a_i)}_{i=1}^N, i = 1, …, N, denotes the probability density function on X = {a_1, …, a_N}.
Free Energy
Given a probability density function ρ on the state space X, the free energy is F(ρ) = U(ρ) − βS(ρ).
U(ρ) is the potential energy:
    U(ρ) = Σ_{i=1}^N Ψ_i ρ_i   (discrete),   U(ρ) = ∫_{R^N} Ψ(x)ρ(x) dx   (continuous).
S(ρ) is the entropy:
    S(ρ) = −Σ_{i=1}^N ρ_i log ρ_i   (discrete),   S(ρ) = −∫_{R^N} ρ(x) log ρ(x) dx   (continuous).
β is the temperature.
Discrete Free Energy and Gibbs Distribution
The free energy of a probability density function ρ on X = {a_1, …, a_N} is
    F(ρ) = U(ρ) − βS(ρ) = Σ_{i=1}^N Ψ_i ρ_i + β Σ_{i=1}^N ρ_i log ρ_i.
The Gibbs distribution is the unique global minimum of F(ρ), denoted {ρ*_i}_{i=1}^N, with
    ρ*_i = (1/K) e^{−Ψ_i/β},   where K = Σ_{i=1}^N e^{−Ψ_i/β} is the normalizer.
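As a quick numerical illustration, the discrete free energy and the Gibbs distribution can be computed in a few lines; a minimal sketch (the potential values and β below are arbitrary illustrative choices):

```python
import numpy as np

def free_energy(rho, psi, beta):
    """Discrete free energy F(rho) = sum_i Psi_i rho_i + beta sum_i rho_i log rho_i."""
    return float(np.dot(psi, rho) + beta * np.sum(rho * np.log(rho)))

def gibbs(psi, beta):
    """Gibbs distribution rho*_i = exp(-Psi_i / beta) / K."""
    w = np.exp(-np.asarray(psi, dtype=float) / beta)
    return w / w.sum()

psi = np.array([1.0, 0.5, 2.0, 0.0])   # illustrative potential
beta = 0.8
rho_star = gibbs(psi, beta)
```

Mixing ρ* with any other distribution raises the free energy, consistent with ρ* being the global minimizer.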
Free Energy and Gibbs Density on R^N
The free energy of a probability density function ρ on X = R^N is
    F(ρ) = ∫_{R^N} Ψ(x)ρ(x) dx + β ∫_{R^N} ρ(x) log ρ(x) dx,
and the Gibbs distribution is the unique minimum of F(ρ), denoted
    ρ*(x) = (1/K) e^{−Ψ(x)/β},   where K = ∫_{R^N} e^{−Ψ(x)/β} dx is the normalizer.
Remark: The free energy is a constant plus the relative entropy with respect to the Gibbs measure.
Fokker-Planck Equation (R^N)
Given the free energy F(ρ) of the density function ρ on R^N, the Fokker-Planck equation is generated by the following differential operator:
    P(·) = ∇·(∇Ψ ·) + βΔ(·).
The evolution equation of the semigroup generated by P is the Fokker-Planck equation
    ∂ρ/∂t = ∇·(∇Ψ ρ) + βΔρ,   (1)
which gives the time evolution of the probability density function ρ(x, t) on X = R^N.
Two Simple Facts
- The free energy decreases along solutions of the Fokker-Planck equation.
- The Gibbs density is a stable stationary solution of the Fokker-Planck equation.
- The decay rate of the free energy functional is exponential.
Remark 2: If we replace ∇Ψ by some other vector field (under some assumptions), the free energy still decreases along solutions of the Fokker-Planck equation.
Remark 3: The Fokker-Planck equation is also called the forward Kolmogorov equation.
Gradient Flow
Given a gradient flow and the same flow subject to white noise,
    dx/dt = −∇Ψ(x),   x ∈ R^N,
    dx = −∇Ψ(x) dt + √(2β) dW_t,   x ∈ R^N,
the time evolution of the probability density function of the perturbed flow satisfies equation (1), which can be verified by Itô calculus.
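This connection can be checked numerically; a minimal Euler-Maruyama sketch (the quadratic potential Ψ(x) = |x|²/2 is an illustrative assumption, not from the slides):

```python
import numpy as np

def euler_maruyama(grad_psi, x0, beta, dt, steps, rng):
    """Simulate dx = -grad_psi(x) dt + sqrt(2 beta) dW_t."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        noise = rng.standard_normal(x.shape)
        x = x - grad_psi(x) * dt + np.sqrt(2.0 * beta * dt) * noise
        path.append(x.copy())
    return np.array(path)

# Psi(x) = |x|^2 / 2, so grad Psi(x) = x; the stationary density is Gibbs.
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: x, x0=[2.0], beta=0.1, dt=0.01, steps=2000, rng=rng)
```

For this potential the long-run samples fluctuate around the minimum at 0 with variance of order β.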
2-Wasserstein Metric (R^N)
Let X = R^N, and let μ_1, μ_2 be Borel probability measures. The 2-Wasserstein distance is
    d_2(μ_1, μ_2)² = inf_{p ∈ P(μ_1, μ_2)} ∫_{R^N × R^N} |x − y|² p(dx dy),
where P(μ_1, μ_2) is the collection of Borel probability measures on R^N × R^N with marginals μ_1 and μ_2.
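For two measures each supported on two points, the infimum can be evaluated by hand: the coupling matrix has a single free parameter, the cost is linear in it, and so the infimum is attained at an endpoint of the feasible interval. A small sketch:

```python
import numpy as np

def w2_two_points(x, y, mu, nu):
    """2-Wasserstein distance between mu on points x = (x1, x2) and nu on
    y = (y1, y2).  The coupling has one free parameter t = p[0, 0]; the cost
    is linear in t, so the infimum is attained at an endpoint of the
    feasible interval [max(0, mu1 + nu1 - 1), min(mu1, nu1)]."""
    c = np.array([[(xi - yj) ** 2 for yj in y] for xi in x])
    lo = max(0.0, mu[0] + nu[0] - 1.0)
    hi = min(mu[0], nu[0])
    best = np.inf
    for t in (lo, hi):
        p = np.array([[t, mu[0] - t],
                      [nu[0] - t, 1.0 - mu[0] - nu[0] + t]])
        best = min(best, float((c * p).sum()))
    return np.sqrt(best)
```

For two point masses the plan is forced and the distance reduces to the Euclidean distance between the supports.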
Facts about the 2-Wasserstein Metric (R^N)
1. The Wasserstein distance comes from optimal transport.
2. The Fokker-Planck equation (1) is the gradient flow of the free energy on the 2-Wasserstein space (F. Otto (1998); R. Jordan, D. Kinderlehrer, and F. Otto (1998)).
3. The Fokker-Planck equation contracts the 2-Wasserstein distance (F. Otto, M. Westdickenberg).
4. Nonnegative Ricci curvature is equivalent to convexity of the entropy on the 2-Wasserstein space (J. Lott, C. Villani, R. McCann, D. Cordero-Erausquin, M. Schmuckenschläger, von Renesse, Sturm, etc.).
5. The logarithmic Sobolev inequality implies the Talagrand inequality (F. Otto, C. Villani).
6. This list can be continued on and on.
Our Motivation
Remark: All the previous remarkable results require X to be a length space.
Extend? Can we extend this series of remarkable facts to the discrete setting?
Fact! The connection between the Fokker-Planck equation and the free energy is no longer true if we replace the Fokker-Planck equation by one built from the discrete Laplace operator.
Two different approaches:
- discretize the manifold (rough curvature);
- discretize Otto's calculus (our work).
Our Motivation
To analyze the Fokker-Planck equation on a graph, one should answer the following two questions:
- What is the 2-Wasserstein distance on a graph?
- How should noise be interpreted in a Markov process?
In fact, because the topology of a graph is different from that of a length space, these two questions lead to two different Fokker-Planck equations!
Theorem 1
Given the free energy F(ρ) = Σ_{i=1}^N (Ψ_i ρ_i + β ρ_i log ρ_i) on a graph G = (V, E), we have the Fokker-Planck equation
    dρ_i/dt = Σ_{j∈N(i), Ψ_j>Ψ_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_j
            + Σ_{j∈N(i), Ψ_j<Ψ_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_i
            + Σ_{j∈N(i), Ψ_j=Ψ_i} β (ρ_j − ρ_i).   (2)
Equation (2) is the gradient flow of the free energy on a Riemannian manifold (M, g) with distance d_Ψ.
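The right-hand side of equation (2) can be implemented directly from the formula; a minimal sketch (the graph, potential, and β used for testing are illustrative assumptions). Each edge term moves mass between its two endpoints, so total mass is conserved by construction, and at the Gibbs distribution all terms vanish:

```python
import numpy as np

def fpe_rhs_thm1(rho, psi, beta, edges):
    """Right-hand side of equation (2): upwind flux in Psi, with a heat
    term on edges of equal potential."""
    rho = np.asarray(rho, dtype=float)
    h = psi + beta * np.log(rho)            # Psi_i + beta log rho_i
    drho = np.zeros_like(rho)
    for i, j in edges:
        if psi[j] > psi[i]:
            i, j = j, i                     # make psi[i] >= psi[j]
        if psi[i] > psi[j]:
            flux = (h[i] - h[j]) * rho[i]   # weighted by the higher vertex
            drho[i] -= flux
            drho[j] += flux
        else:                               # psi_i == psi_j
            drho[i] += beta * (rho[j] - rho[i])
            drho[j] += beta * (rho[i] - rho[j])
    return drho
```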
Theorem 1
Gibbs density ρ* is a stable stationary solution of equation (2) and is the global minimum of the free energy. Given any initial data ρ⁰ ∈ M, we have a unique solution ρ(t): [0, ∞) → M with initial value ρ⁰, and lim_{t→∞} ρ(t) = ρ*. The boundary of M is a repeller of equation (2).
Theorem 2
Given a graph G = (V, E) and a gradient Markov process on G generated by the potential {Ψ_i}_{i=1}^N, suppose the process is subjected to white noise with strength β > 0. Then we have a Fokker-Planck equation (different from the equation in Theorem 1):
    dρ_i/dt = Σ_{j∈N(i), Ψ̄_j>Ψ̄_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_j
            + Σ_{j∈N(i), Ψ̄_j<Ψ̄_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_i,   (3)
where Ψ̄_i = Ψ_i + β log ρ_i.
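Equation (3) can likewise be implemented as an upwind flux in the modified potential Ψ̄; a minimal sketch (the test data are illustrative assumptions). At the Gibbs distribution Ψ̄ is constant, so the right-hand side vanishes:

```python
import numpy as np

def fpe_rhs_thm2(rho, psi, beta, edges):
    """Right-hand side of equation (3): upwind flux in the modified
    potential Psi_bar_i = Psi_i + beta log rho_i."""
    rho = np.asarray(rho, dtype=float)
    h = psi + beta * np.log(rho)         # Psi_bar
    drho = np.zeros_like(rho)
    for i, j in edges:
        if h[j] > h[i]:
            i, j = j, i                  # make Psi_bar[i] >= Psi_bar[j]
        flux = (h[i] - h[j]) * rho[i]    # mass flows downhill in Psi_bar
        drho[i] -= flux
        drho[j] += flux
    return drho
```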
Theorem 2
Equation (3) is the gradient flow of the free energy for the metric space (M, d). Gibbs density ρ* is a stable stationary solution of equation (3) and is the global minimum of the free energy. Given any initial data ρ⁰ ∈ M, we have a unique solution ρ(t): [0, ∞) → M with initial value ρ⁰, and lim_{t→∞} ρ(t) = ρ*. The boundary of M is a repeller of equation (3).
Remark 1
Both equations (2) and (3) are spatial discretizations of the Fokker-Planck equation on R^N based on an upwind scheme. Both equations are gradient flows of the free energy, but on different metric spaces. The Gibbs density is a stable stationary solution of both equations.
Remark 1
In both cases, the metrics are bounded by two different metrics d_m and d_M, which are independent of the potential. All these metrics have some similarity with the 2-Wasserstein distance. Near the Gibbs distribution, the difference between the two equations is very small. The reason why we have two different equations may be related to the fact that a graph is not a length space (see the lecture notes by J. Lott). Thanks to Prof. W. Gangbo for the discussion!
2-Wasserstein Distance on a Graph
The distance d_Ψ can be seen as a discrete version of the 2-Wasserstein distance:
1. From the derivation, d_Ψ is the upwind discretization of the Otto calculus.
2. The geodesic on (M, d_Ψ) is the upwind discretization of the geodesic on (R^d, W_2).
Remark: In the interior of M, all the distances d_Ψ, d, d_m and d_M are bounded by the Euclidean distance (see the next section).
A Toy Example
Consider the following potential function (Figure: Potential Energy).
Discretization: we discretize at five points {a_1 = 1, a_2 = 2, a_3 = 3, a_4 = 4, a_5 = 5}.
Noise: we set the noise strength β = 0.8.
Comparison: Stationary Distribution
Central difference scheme vs. Fokker-Planck equations.
Figure: Central Difference
Figure: Fokker-Planck Equations
Comparison: Free Energy
The free energy decreases in time under both Fokker-Planck equations.
Figure: Equation in Theorem 1
Figure: Equation in Theorem 2
Comparison: Free Energy
The free energy does not always decrease when evolving the equation obtained from the central difference scheme.
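The monotone decay under the upwind equation can be reproduced numerically; a minimal forward-Euler sketch of equation (2) on five points with β = 0.8 (the actual potential is in the lost figure, so the values below are assumed stand-ins with distinct entries):

```python
import numpy as np

# Toy example on five grid points; potential values are assumed stand-ins.
psi = np.array([2.0, 0.5, 1.5, 0.2, 1.0])
beta = 0.8
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

def free_energy(rho):
    return float(psi @ rho + beta * np.sum(rho * np.log(rho)))

def rhs(rho):
    h = psi + beta * np.log(rho)
    d = np.zeros_like(rho)
    for i, j in edges:
        if psi[j] > psi[i]:
            i, j = j, i                  # psi[i] > psi[j] (values distinct)
        flux = (h[i] - h[j]) * rho[i]    # mass flux from the higher vertex
        d[i] -= flux
        d[j] += flux
    return d

rho = np.full(5, 0.2)                    # uniform initial distribution
energies = [free_energy(rho)]
dt = 1e-3
for _ in range(20000):
    rho = rho + dt * rhs(rho)
    energies.append(free_energy(rho))
```

The free energy decreases monotonically along the trajectory, and the solution approaches the Gibbs distribution of the assumed potential.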
Basic Notation - The Graph
The state space is a finite set V = {a_1, a_2, …, a_N}. On each a_i ∈ V we have the potential energy Ψ_i. We consider a graph G = (V, E), where V is the set of vertices and E denotes the edge set. Moreover, we always assume the graph is connected and simple (no multiple edges, no self-loops). N(i) denotes the neighborhood of a_i:
    N(i) = { a_j ∈ V : {a_i, a_j} ∈ E }.
Space of Probability Distributions
M denotes the space of probability distributions on V:
    M = { {ρ_i}_{i=1}^N ∈ R^N : Σ_{i=1}^N ρ_i = 1, ρ_i > 0 }.
The tangent space of M at ρ is
    T_ρM = { {σ_i}_{i=1}^N ∈ R^N : Σ_{i=1}^N σ_i = 0 }.
∂M denotes the boundary of M:
    ∂M = { {ρ_i}_{i=1}^N ∈ R^N : Σ_{i=1}^N ρ_i = 1, ρ_i ≥ 0, and ρ_i = 0 for some i }.
The Identification
Let W be the vector space
    W = { {p_i}_{i=1}^N ∈ R^N } / ∼,
where {p¹_i} ∼ {p²_i} if and only if p¹_i = p²_i + c for all i. An identification between W and T_ρM is given by {p_i} ↦ {σ_i} with
    σ_i = Σ_{j∈N(i), Ψ_j>Ψ_i} (p_i − p_j) ρ_j + Σ_{j∈N(i), Ψ_j<Ψ_i} (p_i − p_j) ρ_i
        + Σ_{j∈N(i), Ψ_j=Ψ_i} (p_i − p_j) (ρ_i − ρ_j)/(log ρ_i − log ρ_j).   (4)
Discrete Otto Calculus
Recall that the 2-Wasserstein distance on R^n (or on a manifold M) is equivalent to the following Otto formula:
    σ = −∇·(ρ ∇p),   g(σ¹, σ²) = ∫_M ρ ∇p¹ · ∇p² dx   for σ ∈ T_ρM.
The fundamental idea is to discretize Otto's formula.
The Norm
The following inner product is well defined on T_ρM:
    g(σ¹, σ²) = Σ_{i=1}^N p¹_i σ²_i,
where {p_i} ↦ {σ_i} under the identification (4). If σ¹ = σ² = σ, the norm is
    g(σ, σ) = Σ_{{a_i,a_j}∈E, Ψ_i>Ψ_j} (p_i − p_j)² ρ_i
            + Σ_{{a_i,a_j}∈E, Ψ_i=Ψ_j} (p_i − p_j)² (ρ_i − ρ_j)/(log ρ_i − log ρ_j).
Clearly g(σ, σ) ≥ 0, and g induces a distance d_Ψ on M.
Upper Bound
The upper-bound metric d_M(·, ·) is induced by the inner product
    g_M(σ¹, σ²) = Σ_{i=1}^N σ¹_i p²_i
on T_ρM. When σ¹ = σ² = σ, we have
    g_M(σ, σ) = Σ_{{a_i,a_j}∈E} (p_i − p_j)² min(ρ_i, ρ_j),
where [{p_i}_{i=1}^N] ↦ {σ_i}_{i=1}^N by the identification
    σ^M_i = Σ_{j∈N(i)} (p_i − p_j) min(ρ_i, ρ_j).
Lower Bound
The lower-bound metric d_m(·, ·) is induced by the inner product
    g_m(σ¹, σ²) = Σ_{i=1}^N σ¹_i p²_i
on T_ρM. When σ¹ = σ² = σ, we have
    g_m(σ, σ) = Σ_{{a_i,a_j}∈E} (p_i − p_j)² max(ρ_i, ρ_j),
where [{p_i}_{i=1}^N] ↦ {σ_i}_{i=1}^N by the identification
    σ^m_i = Σ_{j∈N(i)} (p_i − p_j) max(ρ_i, ρ_j).
The Gradient Flow
Let F = Σ_{i=1}^N Ψ_i ρ_i + β Σ_{i=1}^N ρ_i log ρ_i be the free energy. Its gradient flow
    dρ/dt = −grad F(ρ)
on the Riemannian manifold (M, g) can be rewritten as
    g_ρ(dρ/dt, σ) = −dF_ρ(σ)   for all σ ∈ T_ρM,
where dF is the differential of F.
Fokker-Planck Equation
By a direct computation, we obtain the Fokker-Planck equation in Theorem 1:
    dρ_i/dt = Σ_{j∈N(i), Ψ_j>Ψ_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_j
            + Σ_{j∈N(i), Ψ_j<Ψ_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_i
            + Σ_{j∈N(i), Ψ_j=Ψ_i} β (ρ_j − ρ_i).
Boundary is a Repeller
Consider ∂M = M̄ \ M. For 0 < ε ≪ 1 there is a neighborhood B(ε) of ∂M such that: for every {ρ⁰_i}_{i=1}^N ∈ B(ε) ∩ M there exists T > 0 with {ρ_i(t)}_{i=1}^N ∉ B(ε) for all t > T, where {ρ_i(t)}_{i=1}^N denotes the solution of equation (2) with initial value {ρ⁰_i}_{i=1}^N.
Theorem 2
Given a graph G = (V, E) and a gradient Markov process on G generated by the potential {Ψ_i}_{i=1}^N, suppose the process is subjected to white noise with strength β > 0. Then we have a Fokker-Planck equation (different from the equation in Theorem 1):
    dρ_i/dt = Σ_{j∈N(i), Ψ̄_j>Ψ̄_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_j
            + Σ_{j∈N(i), Ψ̄_j<Ψ̄_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_i,
where Ψ̄_i = Ψ_i + β log ρ_i.
Theorem 2
Equation (3) is the gradient flow of the free energy for the metric space (M, d). Gibbs density ρ* is a stable stationary solution of equation (3) and is the global minimum of the free energy. Given any initial data ρ⁰ ∈ M, we have a unique solution ρ(t): [0, ∞) → M with initial value ρ⁰, and lim_{t→∞} ρ(t) = ρ*. The boundary of M is a repeller of equation (3).
Idea of Proof
    dx/dt = −∇Ψ(x)                 ↔  gradient flow (Markov process) on the graph
    dx = −∇Ψ(x) dt + √(2β) dW_t    ↔  gradient flow subject to white-noise perturbation
    ∂ρ/∂t = ∇·(∇Ψρ) + βΔρ          ↔  Fokker-Planck equation in Theorem 2
Gradient Flow on the Graph
We call the following Markov process X(t) a gradient Markov process generated by the potential {Ψ_i}_{i=1}^N: for {a_i, a_j} ∈ E, if Ψ_i > Ψ_j, the transition rate q_ij from a_i to a_j is Ψ_i − Ψ_j.
Remark: Markov Process
Consider a time-homogeneous continuous-time Markov process X(t) with transition graph G. By the transition rate q_ij = k (i → j) we mean
    P(X(t + h) = a_j | X(t) = a_i) = q_ij h + o(h),
and its Kolmogorov forward equation is
    dρ_i/dt = Σ_{j∈N(i), Ψ_i>Ψ_j} (Ψ_j − Ψ_i) ρ_i + Σ_{j∈N(i), Ψ_j>Ψ_i} (Ψ_j − Ψ_i) ρ_j,   (5)
where ρ_i(t) denotes the probability of {X(t) = a_i}.
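A minimal sketch of the forward equation (5) (the graph and potential are arbitrary choices) also illustrates why noise matters: with β = 0 the process is absorbed at the local minima of Ψ rather than converging to the Gibbs distribution:

```python
import numpy as np

def forward_rhs(rho, psi, edges):
    """Kolmogorov forward equation (5): each edge carries rate
    Psi_high - Psi_low from the higher-potential vertex to the lower one."""
    d = np.zeros_like(rho)
    for i, j in edges:
        if psi[j] > psi[i]:
            i, j = j, i
        rate = psi[i] - psi[j]
        d[i] -= rate * rho[i]
        d[j] += rate * rho[i]
    return d

# Path graph with two local minima of the potential (vertices 1 and 3).
psi = np.array([3.0, 1.0, 2.0, 0.5])
edges = [(0, 1), (1, 2), (2, 3)]
rho = np.full(4, 0.25)
dt = 0.01
for _ in range(5000):
    rho = rho + dt * forward_rhs(rho, psi, edges)
```

In the long run all the mass sits on the two local minima; without noise the global minimum is not singled out.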
Two Observations
Free energy:
    F = Σ Ψ_i ρ_i + β Σ ρ_i log ρ_i = Σ (Ψ_i + β log ρ_i) ρ_i,
    F = ∫_{R^N} Ψρ + β ∫_{R^N} ρ log ρ = ∫_{R^N} (Ψ + β log ρ) ρ.
Fokker-Planck equation:
    ∂ρ/∂t = ∇·(∇Ψ ρ) + βΔρ = ∇·(∇Ψ ρ + β∇ρ) = ∇·((∇Ψ + β∇ρ/ρ) ρ) = ∇·(∇(Ψ + β log ρ) ρ).   (6)
New Potential
From the observations above, we know that in a system with noise the motion depends on both the probability distribution and the potential; in some sense, the noise alters the potential.
New potential: with noise, the potential Ψ becomes another potential depending on ρ:
    Ψ̄ = Ψ + β log ρ.
Markov Process with Noise
We then have a new Markov process X^β(t), which is time-inhomogeneous and can be considered a white-noise perturbation of the original Markov process X(t). If {a_i, a_j} ∈ E and Ψ̄_i > Ψ̄_j, then the transition rate of X^β(t) from a_i to a_j is
    Ψ̄_i − Ψ̄_j = (Ψ_i + β log ρ_i) − (Ψ_j + β log ρ_j).
Fokker-Planck Equation
Set ρ_i = P(X^β(t) = a_i). The Kolmogorov forward equation is
    dρ_i/dt = Σ_{j∈N(i), Ψ̄_j>Ψ̄_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_j
            + Σ_{j∈N(i), Ψ̄_j<Ψ̄_i} ((Ψ_j + β log ρ_j) − (Ψ_i + β log ρ_i)) ρ_i.
This is the Fokker-Planck equation (3) in Theorem 2.
The Metric
Recall that in Theorem 2, equation (3) is the gradient flow of the free energy for a metric space (M, d). By the metric space (M, d) we mean the distance induced by the following inner product on T_ρM:
    ḡ(σ¹, σ²) = Σ_{i=1}^N σ¹_i p²_i,
where [{p_i}_{i=1}^N] ↦ {σ_i}_{i=1}^N by the identification
    σ_i = Σ_{j∈N(i), Ψ̄_j>Ψ̄_i} (p_i − p_j) ρ_j + Σ_{j∈N(i), Ψ̄_j<Ψ̄_i} (p_i − p_j) ρ_i
        + Σ_{j∈N(i), Ψ̄_j=Ψ̄_i} (p_i − p_j) (ρ_i − ρ_j)/(log ρ_i − log ρ_j).   (7)
Division of the Space
The space M is divided into pieces by the (N² − N)/2 submanifolds S_ij:
    S_ij = { {ρ_i}_{i=1}^N ∈ M : ρ_i > 0; Ψ_i + β log ρ_i = Ψ_j + β log ρ_j }.
ḡ(·, ·) is smooth on each component cut out by {S_ij}, and gives a smooth Riemannian distance on every component.
Remark: The submanifolds S_ij intersect at one point: the Gibbs distribution. For every ρ ∈ M that is not on any S_ij, there exists a small neighborhood on which the solution of (3) in Theorem 2 passing through ρ is the gradient flow of the free energy with respect to ḡ.
Picture
The Other Terms
The remaining proof of Theorem 2 is similar to that of Theorem 1:
1. Gibbs density ρ* is a stable stationary solution of equation (3).
2. Given any initial data ρ⁰ ∈ M, we have a unique solution ρ(t): [0, ∞) → M with initial value ρ⁰, and lim_{t→∞} ρ(t) = ρ*.
3. The boundary of M is a repeller of equation (3).
Weighted Graph
Recalling the identification (4) for the distance d_Ψ, we can rewrite it as
    σ = L(G, ρ) p,
where L(G, ρ) is the Laplacian matrix of the weighted graph G = (V, E, w); V and E are given, and w is the weight on the edges:
    w_ij = ρ_i                               if e_ij ∈ E, Ψ_i > Ψ_j,
    w_ij = ρ_j                               if e_ij ∈ E, Ψ_i < Ψ_j,
    w_ij = (ρ_i − ρ_j)/(log ρ_i − log ρ_j)   if e_ij ∈ E, Ψ_i = Ψ_j.
The weight depends on the current probability distribution.
Laplacian Matrix
Let G = (V, E, w) be a weighted graph. Then the Laplacian matrix L is given by
    L_ij = −w_ij             if e_ij ∈ E,
    L_ii = Σ_{j: e_ij∈E} w_ij,
    L_ij = 0                 otherwise.
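A minimal sketch of the ρ-dependent weighted Laplacian L(G, ρ). The equal-potential weight is the logarithmic mean of ρ_i and ρ_j; the explicit handling of its limit value at ρ_i = ρ_j is an implementation detail added here:

```python
import numpy as np

def weight(i, j, rho, psi):
    """Edge weight of the rho-dependent graph from identification (4)."""
    if psi[i] > psi[j]:
        return rho[i]
    if psi[i] < psi[j]:
        return rho[j]
    if np.isclose(rho[i], rho[j]):
        return rho[i]              # limit of the logarithmic mean
    return (rho[i] - rho[j]) / (np.log(rho[i]) - np.log(rho[j]))

def laplacian(rho, psi, edges, n):
    """L_ij = -w_ij on edges, L_ii = sum of incident weights."""
    L = np.zeros((n, n))
    for i, j in edges:
        w = weight(i, j, rho, psi)
        L[i, j] -= w
        L[j, i] -= w
        L[i, i] += w
        L[j, j] += w
    return L
```

The matrix is symmetric and positive semi-definite with zero row sums, so σ = L p automatically lies in the tangent space T_ρM.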
Weighted Graph
Similarly, the distances d_m and d_M can be rewritten in matrix form.
For d_m:
    w_ij = ρ_i if ρ_i > ρ_j,   w_ij = ρ_j if ρ_i < ρ_j   (that is, w_ij = max(ρ_i, ρ_j)).
For d_M:
    w_ij = ρ_i if ρ_i < ρ_j,   w_ij = ρ_j if ρ_i > ρ_j   (that is, w_ij = min(ρ_i, ρ_j)).
Relationship with the Euclidean Distance
For σ ∈ T_ρM with σ = L(G, ρ) p, we have
    g_ρ(σ, σ) = pᵀ L(G, ρ) p,   ‖σ‖² = pᵀ L(G, ρ)ᵀ L(G, ρ) p,
where ‖σ‖ is the Euclidean norm of σ. Hence, when ρ is not on ∂M,
    (1/λ_N) ‖σ‖² ≤ g_ρ(σ, σ) ≤ (1/λ_2) ‖σ‖²,
where λ_N is the largest eigenvalue of L(G, ρ) and λ_2 is the smallest positive eigenvalue of L(G, ρ).
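The sandwich inequality can be checked numerically for a sample ρ and p; a small sketch (the path graph and all data are arbitrary test choices with distinct potential values):

```python
import numpy as np

# Verify (1/lambda_N) |sigma|^2 <= p^T L p <= (1/lambda_2) |sigma|^2.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
psi = np.array([1.0, 0.2, 0.8, 0.1])
rho = np.array([0.1, 0.4, 0.2, 0.3])

L = np.zeros((n, n))
for i, j in edges:
    w = rho[i] if psi[i] > psi[j] else rho[j]   # psi values here are distinct
    L[i, j] -= w; L[j, i] -= w; L[i, i] += w; L[j, j] += w

eig = np.linalg.eigvalsh(L)                      # ascending eigenvalues
lam2 = min(e for e in eig if e > 1e-12)          # smallest positive eigenvalue
lamN = eig[-1]                                   # largest eigenvalue

p = np.array([0.3, -1.0, 0.5, 0.2])
sigma = L @ p
g = float(p @ L @ p)
s2 = float(sigma @ sigma)
```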
Eigenvalue Bound - Upper Bound
There are classical results on eigenvalue bounds for Laplacian matrices. For example, Oscar Rojo gives upper bounds of the form
    λ_N ≤ Σ_{j=1}^N max_{k≠j} w_kj,
together with a refined bound of the form λ_N ≤ (1/2) max_{i∼j} { … }, taken over adjacent vertices i ∼ j (that is, e_ij ∈ E) and expressed in terms of the incident weights Σ_{k∼i, k≠j} w_ik and Σ_{k∼j, k≠i} w_jk.
Eigenvalue Bound - Lower Bound
Abraham Berman and Xiao-Dong Zhang give the following lower bound:
    λ_2 ≥ σ̄ − √(σ̄² − i_c(G)²),
where σ̄ is the maximum of the diagonal elements of L(G, ρ), and i_c(G) is the isoperimetric number of G:
    i_c(G) = min_{X ⊂ V, |X| ≤ N/2} ( Σ_{i∈X, j∉X} w_ij / |X| ).
The Geodesic
We can compute the geodesic equation on the space M. For simplicity, we assume the potential values at the vertices are all distinct. Note that the geodesic has constant speed, so it is also a minimizer of the energy functional
    E(γ) = ∫ g(γ̇, γ̇) dt.
Hence the geodesic satisfies the Euler-Lagrange equation. However, the Hamiltonian formulation reveals more information about the metric d_Ψ.
The Geodesic
After the Legendre transformation, we obtain the following Hamiltonian equations:
    dp_i/dt = −(1/2) Σ_{j∈N(i), Ψ_i>Ψ_j} (p_i − p_j)²,   (8)
    dρ_i/dt = Σ_{j∈N(i), Ψ_i>Ψ_j} (p_i − p_j) ρ_i + Σ_{j∈N(i), Ψ_i<Ψ_j} (p_i − p_j) ρ_j.   (9)
Remark: This is the upwind discretization of the geodesic equation on the 2-Wasserstein space.
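The system (8)-(9) is Hamiltonian with H = (1/2) Σ_{edges, Ψ_i>Ψ_j} (p_i − p_j)² ρ_i, so H is conserved along exact solutions; a small RK4 sketch (the triangle graph, potential, and initial data are arbitrary test choices with distinct potential values, as assumed above):

```python
import numpy as np

psi = np.array([2.0, 1.0, 3.0])
edges = [(0, 1), (1, 2), (0, 2)]

def hamiltonian(rho, p):
    """H = (1/2) sum over edges of (p_i - p_j)^2 * rho at the higher vertex."""
    return 0.5 * sum((p[i] - p[j]) ** 2 * (rho[i] if psi[i] > psi[j] else rho[j])
                     for i, j in edges)

def flow(rho, p):
    """Right-hand sides of (9) for rho and (8) for p."""
    drho = np.zeros_like(rho)
    dp = np.zeros_like(p)
    for i, j in edges:
        if psi[j] > psi[i]:
            i, j = j, i                  # make psi[i] > psi[j]
        drho[i] += (p[i] - p[j]) * rho[i]
        drho[j] += (p[j] - p[i]) * rho[i]
        dp[i] -= 0.5 * (p[i] - p[j]) ** 2
    return drho, dp

rho = np.array([0.3, 0.4, 0.3])
p = np.array([0.5, -0.2, 0.1])
h0 = hamiltonian(rho, p)
dt = 1e-3
for _ in range(500):                     # classical RK4 time stepping
    k1 = flow(rho, p)
    k2 = flow(rho + dt/2 * k1[0], p + dt/2 * k1[1])
    k3 = flow(rho + dt/2 * k2[0], p + dt/2 * k2[1])
    k4 = flow(rho + dt * k3[0], p + dt * k3[1])
    rho = rho + dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    p = p + dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
```

Total mass is conserved exactly, and the Hamiltonian is conserved up to the RK4 discretization error.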
The Geodesic (figure)
One Example
    d_Ψ(ρ¹, ρ²) = √( 2 Σ_{i=1}^N ( √ρ¹_i − √ρ²_i )² ).
1. The Master Equation
When the potential function is constant, the Fokker-Planck equation on the graph reduces to the master equation
    dρ_i/dt = β Σ_{j∈N(i)} (ρ_j − ρ_i).
The master equation also describes the time evolution of the probability distribution of the standard random walk, and the entropy increases monotonically along the master equation.
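A minimal sketch of the master equation on a small path graph (the graph, β, and initial data are arbitrary test choices), checking that the entropy S = −Σ ρ_i log ρ_i increases:

```python
import numpy as np

# Master equation d(rho_i)/dt = beta * sum_{j in N(i)} (rho_j - rho_i),
# evolved by forward Euler on a 4-vertex path graph.
beta = 1.0
edges = [(0, 1), (1, 2), (2, 3)]
rho = np.array([0.7, 0.1, 0.1, 0.1])
dt = 0.01

def entropy(r):
    return float(-np.sum(r * np.log(r)))

entropies = [entropy(rho)]
for _ in range(500):
    d = np.zeros_like(rho)
    for i, j in edges:
        d[i] += beta * (rho[j] - rho[i])
        d[j] += beta * (rho[i] - rho[j])
    rho = rho + dt * d
    entropies.append(entropy(rho))
```

Each Euler step is a doubly stochastic map of ρ, so the entropy is nondecreasing step by step, and ρ relaxes toward the uniform distribution.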
2. From Markov Process to Fokker-Planck Equation
Let us consider a Markov process on a graph without any potential function assigned. Given a time-homogeneous Markov process, in some cases we can still construct a potential energy from the Markov process, as in the following picture.
2. From Markov Process to Fokker-Planck Equation
More precisely, if the transition graph of the given random walk is a gradient-like graph, then the potential energy function is well defined. For a weighted directed simple graph: if every directed path from i to j has the same total weight, for any two vertices i and j, then a potential energy function is well defined on this graph; moreover, the potential function is unique up to a constant.
Remark: Every connected tree with weighted edges is a gradient-like graph.
2. From Markov Process to Fokker-Planck Equation
Figure: The left graph is gradient like.
3. Upwind Scheme
On lattices, both equations (2) and (3) have an upwind structure: the drift part of equation (2) is an upwind scheme with respect to the potential {Ψ_i}_{i=1}^N, while the whole of equation (3) is an upwind scheme with respect to the equivalent potential {Ψ̄_i}_{i=1}^N.
Conservation law: in numerics, the upwind scheme is a commonly used conservative scheme for nonlinear hyperbolic equations.
4. The Fundamental Relationship
In the discrete-space case, the free energy, the Fokker-Planck equations, and the stochastic process have the same strong relationship as in the continuous case.
Fokker-Planck Equation on a Graph: A Markov Process on a Graph
We now present one general example of the discrete Fokker-Planck equation on a graph. Suppose G = (V, E) is a graph and a Markov process is defined on it. We will show:
- the Markov process;
- the graph with potential;
- the discrete Fokker-Planck equation of the perturbed Markov process;
- the Gibbs distribution.
The Markov Process
This is the graph: the black numbers are the vertex labels, and the red numbers are the transition rates.
The Transition Matrix
The transition rates q_ij of this Markov process (row i, column j; off-diagonal entries) form the matrix
    Q = [ 0 1 0 0 0
          0 0 0 0 0
          0 2 0 0 1
          0 0 1 0 2
          0 1 0 0 0 ].
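The forward equation dρ_i/dt = Σ_j q_ji ρ_j − (Σ_j q_ij) ρ_i for this rate matrix can be integrated directly; a minimal sketch (this is the unperturbed process, i.e. β = 0, so vertex a_2, which has no outgoing rate, absorbs all the mass):

```python
import numpy as np

# Forward equation d(rho)/dt = Q^T rho - (out-rates) * rho for the
# off-diagonal transition-rate matrix from the slide.
Q = np.array([[0., 1., 0., 0., 0.],
              [0., 0., 0., 0., 0.],
              [0., 2., 0., 0., 1.],
              [0., 0., 1., 0., 2.],
              [0., 1., 0., 0., 0.]])
out = Q.sum(axis=1)                # total out-rate of each vertex
rho = np.full(5, 0.2)              # uniform initial distribution
dt = 1e-3
for _ in range(20000):
    rho = rho + dt * (Q.T @ rho - out * rho)
```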
The Graph with Potential
Since the underlying weighted directed graph is gradient-like, we may associate a potential energy with each vertex:
The Equation
Suppose the noise strength is β = 0.9. Then the Fokker-Planck equation is:
    dρ_1/dt = −ρ_1 − 0.9 (log ρ_1 − log ρ_2) ρ_1,
    dρ_2/dt = ρ_1 + 2ρ_3 + ρ_5 + 0.9 ((log ρ_5 − log ρ_2) ρ_5 + (log ρ_3 − log ρ_2) ρ_3 + (log ρ_1 − log ρ_2) ρ_1),
    dρ_3/dt = ρ_4 − 3ρ_3 − 0.9 ((log ρ_3 − log ρ_2) ρ_3 + (log ρ_3 − log ρ_5) ρ_3 − (log ρ_4 − log ρ_3) ρ_4),
    dρ_4/dt = −3ρ_4 − 0.9 ((log ρ_4 − log ρ_3) ρ_4 + (log ρ_4 − log ρ_5) ρ_4),
    dρ_5/dt = −ρ_5 + 2ρ_4 + ρ_3 + 0.9 (−(log ρ_5 − log ρ_2) ρ_5 + (log ρ_4 − log ρ_5) ρ_4 + (log ρ_3 − log ρ_5) ρ_3).
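This system can be integrated by forward Euler; a minimal sketch. The vertex potentials Ψ = (1, 0, 2, 3, 1), normalized by Ψ_2 = 0, are recovered here from the transition rates appearing in the equations, so convergence to the corresponding Gibbs distribution can be checked:

```python
import numpy as np

beta = 0.9

def rhs(r):
    """Right-hand side of the five equations above (beta = 0.9)."""
    l = np.log(r)
    return np.array([
        -r[0] - beta * (l[0] - l[1]) * r[0],
        r[0] + 2*r[2] + r[4] + beta * ((l[4]-l[1])*r[4] + (l[2]-l[1])*r[2]
                                       + (l[0]-l[1])*r[0]),
        r[3] - 3*r[2] - beta * ((l[2]-l[1])*r[2] + (l[2]-l[4])*r[2]
                                - (l[3]-l[2])*r[3]),
        -3*r[3] - beta * ((l[3]-l[2])*r[3] + (l[3]-l[4])*r[3]),
        -r[4] + 2*r[3] + r[2] + beta * (-(l[4]-l[1])*r[4] + (l[3]-l[4])*r[3]
                                        + (l[2]-l[4])*r[2]),
    ])

rho = np.full(5, 0.2)
dt = 1e-3
for _ in range(20000):
    rho = rho + dt * rhs(rho)

# Potentials recovered from the transition rates, normalized by Psi_2 = 0.
psi = np.array([1.0, 0.0, 2.0, 3.0, 1.0])
gibbs = np.exp(-psi / beta)
gibbs = gibbs / gibbs.sum()
```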
The Gibbs Distribution
The discrete Fokker-Planck equation then converges to the Gibbs distribution, which is shown in the picture (the noise level is β = 0.9).
Free Energy vs. Time
And the free energy decreases with time:
Things to Do
This is only the beginning of this work; many questions still do not have answers:
1. rate of convergence (just done by a student of Wen Huang);
2. functional inequalities;
3. convexity of entropy or relative entropy (free energy);
4. relationship between convexity of entropy and nonnegative Ricci curvature;
5. applications in other areas.