On a class of stochastic differential equations in a financial network model
Tomoyuki Ichiba
Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University of California, Santa Barbara
Part of this research is joint work with Nils Detering & Jean-Pierre Fouque
Thera Stochastics, Thera, May 2017
Motivation: On $(\Omega, \mathcal F, (\mathcal F_t), \mathbb P)$ let us consider the $\mathbb R^N$-valued diffusion process $(X_1(t), \ldots, X_N(t))$, $0 \le t < \infty$, induced by the following random graph structure. Suppose that at time $0$ we have a random graph of $N$ vertices $\{1, \ldots, N\}$, and define the strength of the connection between vertices $i$ and $j$ by an $\mathcal F_0$-measurable random variable $a_{i,j}$ (whose distribution may depend on $N$) for every $1 \le i \ne j \le N$, and fix $a_{i,i} = 0$ for $1 \le i \le N$. We shall consider
$$ dX_i(t) \,=\, \frac{1}{N} \sum_{j=1}^N a_{i,j} \big( X_j(t) - X_i(t) \big) \, dt + dB_i(t), \qquad i = 1, \ldots, N, \ t \ge 0, $$
where $(B_1(t), \ldots, B_N(t))$, $t \ge 0$, is the standard $N$-dimensional Brownian motion, independent of the initial values $(X_1(0), \ldots, X_N(0))$ and of the random variables $(a_{i,j})_{1 \le i,j \le N}$. The randomness determined at time $0$ affects the diffusion process $(X_1(\cdot), \ldots, X_N(\cdot))$.

Deterministic $A$ in the context of financial networks: Carmona, Fouque & Sun ('13), Fouque & Ichiba ('13), ...

The system is solvable as a linear stochastic system for $X(\cdot) := (X_1(\cdot), \ldots, X_N(\cdot))$. Let $A^{(N)} := (a_{i,j})_{1 \le i,j \le N}$ be the $(N \times N)$ random matrix and let $B(\cdot)$ be the $(N \times 1)$-vector-valued standard Brownian motion.
Then the system can be rewritten as
$$ dX(t) = \bar A^{(N)} X(t) \, dt + dB(t), \qquad \bar A^{(N)} := \frac{1}{N} \Big( A^{(N)} - \mathrm{Diag}\big(A^{(N)} \mathbf 1_N\big) \Big), $$
where $\mathbf 1_N$ is the $(N \times 1)$ vector of ones, and $\mathrm{Diag}(c)$ is the diagonal matrix whose diagonal elements are the elements of the vector $c$. Note that each row sum of the matrix $\bar A^{(N)}$ is zero by construction, i.e.,
$$ \bar a^{(N)}_{i,i} \,=\, - \sum_{j \ne i} \bar a^{(N)}_{i,j} \quad \text{for each } i = 1, \ldots, N, $$
where $\bar a^{(N)}_{i,j}$ is the $(i,j)$ element of the matrix $\bar A^{(N)}$.
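As a numerical sketch (not part of the original slides), the construction of $\bar A^{(N)}$, its zero-row-sum property, and its agreement with the componentwise drift can be checked directly; the matrix size and uniform random weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Illustrative connection strengths a_{i,j} with zero diagonal.
A = rng.uniform(0.0, 1.0, size=(N, N))
np.fill_diagonal(A, 0.0)

# Drift matrix of the linear system dX(t) = Abar X(t) dt + dB(t):
# Abar = (1/N) (A - Diag(A 1_N)); each of its rows sums to zero.
Abar = (A - np.diag(A @ np.ones(N))) / N
row_sums = Abar @ np.ones(N)

# The drift Abar x agrees with (1/N) sum_j a_{i,j} (x_j - x_i).
x = rng.normal(size=N)
drift_direct = np.array([A[i] @ (x - x[i]) / N for i in range(N)])
```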
The solution to this linear equation is given by
$$ X(t) \,=\, e^{t \bar A^{(N)}} X(0) + \int_0^t e^{(t-s) \bar A^{(N)}} \, dB(s), \qquad t \ge 0. $$
Here $e^{t \bar A^{(N)}}$ is the $(N \times N)$ matrix exponential. Given the initial value $X(0)$ and $\bar A^{(N)}$, the law of $X(\cdot)$ is conditionally an $N$-dimensional Gaussian law with mean $e^{t \bar A^{(N)}} X(0)$ and variance-covariance matrix $\mathrm{Var}(X(t) \mid \bar A^{(N)})$.

Q. How to understand the large-network case, i.e., what happens as $N \to \infty$? For example, suppose $\bar A^{(N)} \xrightarrow{\ \text{a.s.}\ } A^{(\infty)}$ as $N \to \infty$, and
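The conditional Gaussian law can be verified numerically on a toy matrix: the covariance $\int_0^t e^{s \bar A} e^{s \bar A^\top} ds$ computed by quadrature agrees with the solution of the Lyapunov ODE $V'(t) = \bar A V + V \bar A^\top + I$, $V(0) = 0$. A sketch assuming scipy; the $3 \times 3$ matrix is an illustrative choice.

```python
import numpy as np
from scipy.integrate import simpson, solve_ivp
from scipy.linalg import expm

# Illustrative zero-row-sum drift matrix and initial value.
Abar = np.array([[-1.0, 1.0, 0.0],
                 [0.0, -1.0, 1.0],
                 [0.5, 0.5, -1.0]])
x0 = np.array([1.0, 0.0, -1.0])
t = 2.0

# Conditional mean e^{t Abar} X(0).
mean = expm(t * Abar) @ x0

# Covariance V(t) = \int_0^t e^{s Abar} e^{s Abar^T} ds by quadrature.
s_grid = np.linspace(0.0, t, 201)
integrand = np.array([expm(s * Abar) @ expm(s * Abar).T for s in s_grid])
V_quad = simpson(integrand, x=s_grid, axis=0)

# Same V(t) from the Lyapunov ODE V' = Abar V + V Abar^T + I, V(0) = 0.
def lyapunov_rhs(s, v):
    V = v.reshape(3, 3)
    return (Abar @ V + V @ Abar.T + np.eye(3)).ravel()

sol = solve_ivp(lyapunov_rhs, (0.0, t), np.zeros(9), rtol=1e-10, atol=1e-12)
V_ode = sol.y[:, -1].reshape(3, 3)
```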
if $a^{(\infty)}_{i,j} = 1$ for $j = i+1$, $a^{(\infty)}_{i,i} = -1$, and $a^{(\infty)}_{i,j} = 0$ otherwise, i.e.,
$$ A^{(\infty)} := \begin{pmatrix} -1 & 1 & 0 & \cdots \\ 0 & -1 & 1 & \ddots \\ \vdots & \ddots & \ddots & \ddots \end{pmatrix}, $$
then how can we solve
$$ dX_1(t) = \big( X_2(t) - X_1(t) \big) dt + dB_1(t), \qquad dX_2(t) = \big( X_3(t) - X_2(t) \big) dt + dB_2(t), \quad \ldots ? $$
Finite-$N$ case: Fernholz & Karatzas ('08-'09) studied flows, filtering and pseudo-Brownian motion processes in equity markets.
Example as $N \to \infty$. For simplicity let us set $X_i(0) = 0$. Given $X_2(\cdot)$, we have
$$ X_1(t) \,=\, \int_0^t e^{-(t-s)} X_2(s) \, ds + \int_0^t e^{-(t-s)} \, dB_1(s), $$
and also, given $X_3(\cdot)$,
$$ X_2(s) \,=\, \int_0^s e^{-(s-u)} X_3(u) \, du + \int_0^s e^{-(s-u)} \, dB_2(u) $$
for $0 \le s \le t$; hence, substituting $X_2(\cdot)$ into the first equation,
$$ X_1(t) \,=\, \int_0^t e^{-(t-s)} \int_0^s e^{-(s-u)} X_3(u) \, du \, ds + \int_0^t e^{-(t-s)} \, dB_1(s) + \int_0^t (t-u) \, e^{-(t-u)} \, dB_2(u) $$
for $t \ge 0$.
By the product rule for semimartingales, we observe
$$ \int_0^t \int_0^s e^u (s-u)^{k-1} \, dB(u) \, ds \,=\, \int_0^t e^u \, \frac{(t-u)^k}{k} \, dB(u) $$
for $k \in \mathbb N$, $t \ge 0$, and hence
$$ \int_0^t \int_0^s e^u \, dB(u) \, ds \,=\, \int_0^t e^u (t-u) \, dB(u), $$
$$ \int_0^t \int_0^{s_k} \cdots \int_0^{s_1} e^u \, dB(u) \, ds_1 \cdots ds_k \,=\, \int_0^t e^u \, \frac{(t-u)^k}{k!} \, dB(u) $$
for $k \in \mathbb N$, $t \ge 0$. Thus for the above example we have
$$ X_1(t) \,=\, \sum_{k=0}^\infty \int_0^t e^{-(t-u)} \, \frac{(t-u)^k}{k!} \, dB_{k+1}(u), \qquad t \ge 0. $$
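On a discrete grid the $k = 1$ identity becomes an exact summation-by-parts: the left-endpoint Riemann sums of both sides coincide term by term for any increments. A numerical sketch (not from the slides; the grid size and simulated increments are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
t, n = 1.0, 4000
du = t / n
u = np.arange(n) * du                   # left endpoints u_i = i * du
db = rng.normal(0.0, np.sqrt(du), n)    # Brownian increments dB(u_i)

# Left side: \int_0^t ( \int_0^s e^u dB(u) ) ds, with the inner
# integral accumulated up to each grid point.
inner = np.cumsum(np.exp(u) * db)
lhs = np.sum(inner) * du

# Right side: \int_0^t e^u (t - u) dB(u) on the same grid.
rhs = np.sum(np.exp(u) * (t - u) * db)
```

Exchanging the order of summation shows the two sums are identical, which is exactly the Fubini-type argument behind the identity.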
$$ X_1(t) \,=\, \sum_{k=0}^\infty \int_0^t e^{-(t-u)} \, \frac{(t-u)^k}{k!} \, dB_{k+1}(u) $$
is a centered Gaussian process with covariances
$$ \mathbb E[X_1(s) X_1(t)] \,=\, e^{-(s+t)} \sum_{k=0}^\infty \int_0^s e^{2u} \, \frac{(s-u)^k (t-u)^k}{(k!)^2} \, du \,=\, e^{-(t-s)} \int_0^s e^{-2v} \, I_0\big( 2 \sqrt{(t-s+v) \, v} \big) \, dv $$
for $0 \le s \le t$, where $I_\alpha(\cdot)$ is the modified Bessel function of the first kind with parameter $\alpha$, i.e.,
$$ I_\alpha(x) \,:=\, \sum_{k=0}^\infty \frac{(x/2)^{2k+\alpha}}{\Gamma(k+1) \, \Gamma(\alpha+k+1)}, \qquad x > 0. $$
In particular,
$$ \mathrm{Var}(X_1(t)) \,=\, \int_0^t e^{-2v} I_0(2v) \, dv \,=\, t \, e^{-2t} \big( I_0(2t) + I_1(2t) \big) \,<\, \infty $$
(it grows as $O(t^{1/2})$ for large $t$; also, $\mathbb E[X_1(s) X_1(s+t)] = O\big( e^{-(t - 2\sqrt{(t+s)s})} \, t^{-1/4} \big)$). Thus $X_1(\cdot)$ is not stationary.

The (marginal) distribution of $X_k(\cdot)$, $k \in \mathbb N$, is the same as that of $X_1(\cdot)$, and hence we may compute (at least numerically)
$$ \mathbb E[X_1(t) X_2(u)] \,=\, \int_0^t e^{-(t-s)} \, \mathbb E\big[ X_2(s) X_2(u) \big] \, ds \,=\, \int_0^t e^{-(t-s)} \, \mathbb E\big[ X_1(s) X_1(u) \big] \, ds $$
and, recursively, $\mathbb E[X_1(t) X_k(u)]$, $k \in \mathbb N$, for $0 \le t, u < \infty$.
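A quick numerical check of the variance identity and of the $O(t^{1/2})$ growth (a sketch, assuming scipy's `iv` and `quad`):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

def var_X1(t):
    # Closed form: Var(X_1(t)) = t e^{-2t} (I_0(2t) + I_1(2t)).
    return t * np.exp(-2.0 * t) * (iv(0, 2.0 * t) + iv(1, 2.0 * t))

def var_X1_integral(t):
    # Integral form: Var(X_1(t)) = \int_0^t e^{-2v} I_0(2v) dv.
    val, _ = quad(lambda v: np.exp(-2.0 * v) * iv(0, 2.0 * v), 0.0, t)
    return val

# O(t^{1/2}) growth: for large t, Var(X_1(t)) ~ sqrt(t / pi),
# from the asymptotics I_nu(x) ~ e^x / sqrt(2 pi x).
growth_ratio = var_X1(100.0) / np.sqrt(100.0 / np.pi)
```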
[Figure: sample path of $(X_1(\cdot), X_2(\cdot))$ generated from the covariance structure.]
$$ X_1(t) \,=\, \int_0^t e^{-(t-s)} X_2(s) \, ds + \int_0^t e^{-(t-s)} \, dB_1(s), \qquad t \ge 0, $$
with $\mathrm{Law}(X_1(\cdot)) = \mathrm{Law}(X_2(\cdot))$.
Weighted by Poisson probabilities. An interpretation of
$$ X_1(t) \,=\, \sum_{k=0}^\infty \int_0^t e^{-(t-u)} \, \frac{(t-u)^k}{k!} \, dB_{k+1}(u) \,=:\, \sum_{k=0}^\infty \int_0^t p_k(t-u) \, dB_{k+1}(u), \qquad t \ge 0: $$
suppose $N(s)$, $0 \le s \le t$, is a Poisson process with rate $1$, independent of $(B_k(\cdot), k \in \mathbb N)$. Then
$$ X_1(t) \,=\, \mathbb E \Big[ \, \sum_{k=0}^\infty \int_0^t \mathbf 1_{\{N(t-u) = k\}} \, dB_{k+1}(u) \, \Big| \, \mathcal F(t) \Big], $$
where $\mathcal F(t) := \sigma(B_k(s); s \le t, k \in \mathbb N)$, $t \ge 0$.
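The weights $p_k(t) = e^{-t} t^k / k!$ are exactly the rate-1 Poisson probabilities $\mathbb P(N(t) = k)$; a one-line cross-check (a sketch, assuming scipy.stats):

```python
from math import exp, factorial

from scipy.stats import poisson

def p_k(t, k):
    # Series weight p_k(t) = e^{-t} t^k / k!, i.e. P(N(t) = k) for a
    # rate-1 Poisson process N.
    return exp(-t) * t**k / factorial(k)

# The weights form a probability distribution in k for each fixed t.
total_mass = sum(p_k(0.7, k) for k in range(60))
```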
If we replace the Poisson probability by a compound Poisson probability, i.e.,
$$ \widetilde N(t) \,:=\, \sum_{k=1}^{N(t)} \xi_k, $$
where $(\xi_k, k \in \mathbb N)$ are i.i.d. integer-valued random variables with $\mathbb P(\xi_1 = i) = p_i$, $1 \le i \le q$, $\sum_{i=1}^q p_i = 1$ for some $q \in \mathbb N$, independent of $N(\cdot)$ and $(B_k(\cdot), k \in \mathbb N)$, then
$$ \widetilde X_1(t) \,:=\, \mathbb E \Big[ \, \sum_{k=0}^\infty \int_0^t \mathbf 1_{\{\widetilde N(t-u) = k\}} \, dB_{k+1}(u) \, \Big| \, \mathcal F(t) \Big] \,=\, \sum_{k=0}^\infty \int_0^t \widetilde p_k(t-u) \, dB_{k+1}(u), $$
where
$$ \widetilde p_k(t) \,:=\, \frac{1}{k!} \, \frac{\partial^k}{\partial z^k} \exp \Big( \sum_{i=1}^q p_i \, t \, (z^i - 1) \Big) \Big|_{z=0} $$
for $k \in \mathbb N_0$, $t \ge 0$.
This corresponds to the modified matrix
$$ A^{(\infty)} := \begin{pmatrix} -1 & p_1 & p_2 & \cdots & p_q & & \\ & -1 & p_1 & p_2 & \cdots & p_q & \\ & & \ddots & \ddots & & & \ddots \end{pmatrix} $$
and
$$ dX_k(t) \,=\, \Big( -X_k(t) + \sum_{i=1}^q p_i \, X_{i+k}(t) \Big) dt + dB_k(t), \qquad X_k(0) = 0, \ k \in \mathbb N, \ t \ge 0. $$
In particular, if $q = 2$, $p_1 = p_2 = 1/2$, then
$$ \widetilde p_k(t) \,=\, \sum_{j=0}^{\lfloor k/2 \rfloor} \frac{e^{-t} \, t^{k-j}}{2^{k-j} \, (k-2j)! \, j!}, \qquad t \ge 0, \ k \in \mathbb N_0. $$
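For $q = 2$, $p_1 = p_2 = 1/2$, the displayed formula for $\widetilde p_k(t)$ can be cross-checked against the compound Poisson probability computed by conditioning on the number of jumps: given $m$ jumps of size 1 or 2, $\widetilde N(t) = m + \mathrm{Binomial}(m, 1/2)$. A numerical sketch (not from the slides):

```python
from math import comb, exp, factorial

def p_tilde(k, t):
    # Explicit formula from the slide for q = 2, p_1 = p_2 = 1/2.
    return sum(exp(-t) * t**(k - j) / (2**(k - j) * factorial(k - 2 * j) * factorial(j))
               for j in range(k // 2 + 1))

def p_tilde_direct(k, t):
    # P(Ntilde(t) = k), conditioning on m Poisson(t) jumps; the number
    # of size-2 jumps among them is Binomial(m, 1/2), and m + #(size-2)
    # must equal k, so ceil(k/2) <= m <= k.
    return sum(exp(-t) * t**m / factorial(m) * comb(m, k - m) / 2**m
               for m in range((k + 1) // 2, k + 1))
```

Substituting $m = k - j$ shows the two sums agree term by term.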
Another modification:
$$ dX_1(t) = \big( X_1(t) - X_2(t) \big) dt + dB_1(t), \qquad dX_2(t) = \big( X_2(t) - X_3(t) \big) dt + dB_2(t), \quad \ldots $$
We may use the same reasoning in this case to obtain
$$ X_1(t) \,=\, \sum_{k=0}^\infty \int_0^t e^{t-s} \, \frac{(-1)^k (t-s)^k}{k!} \, dB_{k+1}(s) $$
with exponentially growing variance
$$ \mathrm{Var}(X_1(t)) \,=\, t \, e^{2t} \big( I_0(2t) - I_1(2t) \big), \qquad t \ge 0. $$
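The same type of Bessel identity holds in this unstable direction: summing the squared integrands gives $\mathrm{Var}(X_1(t)) = \int_0^t e^{2v} I_0(2v) \, dv$, which matches the closed form above. A numerical check (a sketch assuming scipy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

def var_modified(t):
    # Closed form from the slide: t e^{2t} (I_0(2t) - I_1(2t)).
    return t * np.exp(2.0 * t) * (iv(0, 2.0 * t) - iv(1, 2.0 * t))

def var_modified_integral(t):
    # Summing the squared integrands gives \int_0^t e^{2v} I_0(2v) dv.
    val, _ = quad(lambda v: np.exp(2.0 * v) * iv(0, 2.0 * v), 0.0, t)
    return val
```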
A formulation of the equation with identical distributions. Let us consider $(\Omega, \mathcal F, \mathbb P; \{\mathcal F(t), t \ge 0\})$ on which $X(\cdot)$ is an adapted stochastic process which is a weak solution to
$$ dX(t) \,=\, b(t, X(t), X_1(t)) \, dt + \sigma(t, X(t), X_1(t)) \, dB(t), \qquad t \ge 0, $$
where the $r$-dimensional standard Brownian motion $B(\cdot)$ is independent of the $d$-dimensional process $X_1(\cdot)$, which has the same distribution as $X(\cdot)$ on $[0, T]$, i.e.,
$$ \mathrm{Law}(X(s), 0 \le s \le T) \,=\, \mathrm{Law}(X_1(s), 0 \le s \le T), $$
and also, $\mathbb P$-a.s., $X(0) = x_0 \in \mathbb R^d$ and
$$ \int_0^T \big( \| b(t, X(t), X_1(t)) \| + \| \sigma_{i,j}(t, X(t), X_1(t)) \|^2 \big) \, dt \,<\, +\infty $$
for $1 \le i \le d$, $1 \le j \le r$, $T \ge 0$. Here we assume
$b: \mathbb R_+ \times \mathbb R^d \times \mathbb R^d \to \mathbb R^d$ and $\sigma: \mathbb R_+ \times \mathbb R^d \times \mathbb R^d \to \mathbb R^{d \times r}$ are Lipschitz continuous with at most linear growth, i.e., there exists a constant $K > 0$ such that
$$ \| b(t,x,y) - b(t, \widetilde x, \widetilde y) \| + \| \sigma(t,x,y) - \sigma(t, \widetilde x, \widetilde y) \| \,\le\, K \big( \| x - \widetilde x \| + \| y - \widetilde y \| \big) $$
and
$$ \| b(t,x,y) \|^2 + \| \sigma(t,x,y) \|^2 \,\le\, K^2 \big( 1 + \| x \|^2 + \| y \|^2 \big) $$
for every $(t,x,y) \in \mathbb R_+ \times \mathbb R^d \times \mathbb R^d$. We also assume that $X_1(\cdot)$ is adapted to the filtration $\{\mathcal F_1(t), t \ge 0\}$ generated by the Brownian motions $(B_k(t), k \ge 1, t \ge 0)$, augmented by the $\mathbb P$-null sets. We shall solve this system with the distributional identity.
When $b(t,x,y) = y - x$ and $\sigma(t,x,y) = 1$, it reduces to the first example
$$ X_1(t) \,=\, \int_0^t e^{-(t-s)} X_2(s) \, ds + \int_0^t e^{-(t-s)} \, dB_1(s), \qquad t \ge 0. $$
In the linear case we may consider the corresponding $A^{(\infty)}$ of the block matrix form
$$ A^{(\infty)} \,=\, \begin{pmatrix} A_{1,1} & A_{1,2} & & \\ & A_{1,1} & A_{1,2} & \\ & & \ddots & \ddots \end{pmatrix}. $$
It looks similar to the nonlinear diffusion
$$ dX(t) \,=\, b(X(t), \mathbb E[X(t)]) \, dt + \sigma(X(t), \mathbb E[X(t)]) \, dB(t), \qquad t \ge 0, $$
of mean-field type, which appears as the McKean-Vlasov limit of interacting particles (McKean ('67), Kac ('73), Sznitman ('89), Tanaka ('84), Shiga & Tanaka ('85), ...), but it is different.
Proposition. On some probability space $(\Omega, \mathcal F, \mathbb P; \{\mathcal F(t), t \ge 0\})$ there is a unique weak solution to
$$ dX(t) \,=\, b(t, X(t), X_1(t)) \, dt + \sigma(t, X(t), X_1(t)) \, dB(t), \qquad t \ge 0, $$
with $\mathrm{Law}(X_1(t), 0 \le t \le T) = \mathrm{Law}(X(t), 0 \le t \le T)$, and
$$ \varphi(t) \,:=\, \mathbb E \big[ \sup_{0 \le s \le t} \| X(s) - X_1(s) \|^2 \big] $$
satisfies
$$ \kappa \Big( \int_0^t \varphi(s) \, ds + \int_0^t e^{-(t-s)} \int_0^s \varphi(u) \, du \, ds \Big) \,\le\, c_1 \, \varphi(t), \qquad 0 \le t \le T, \ T > 0, $$
where $\kappa := 4 K^2 (\lambda_1 + T)$, $c := 9 \max\big( 1, K^2 (\lambda_1 + T)(1 \vee T) \big)$, $c_1 := \big( 1 - e^{-(c - \kappa) T} \big) / (c - \kappa)$, and $\lambda_1$ is a universal constant from the Burkholder-Davis-Gundy inequality.
Idea of proof: Given $B(\cdot)$ and $X_1(\cdot)$, we may construct $X(\cdot)$ by the method of Picard iteration, i.e., there exists a map $\Phi: C([0,\infty); \mathbb R^d) \times C([0,\infty); \mathbb R^r) \to C([0,\infty); \mathbb R^d)$ with $X(t) = \Phi_t(X_1, B)$ for $t \ge 0$. We shall find a fixed point of $\Phi(\cdot, B)$, i.e.,
$$ \mathrm{Law}(X_1(\cdot)) \,=\, \mathrm{Law}(\Phi(X_1, B)) \,=\, \mathrm{Law}(X(\cdot)), $$
by evaluating the Wasserstein distance
$$ W_{2,T}(\mu, \widetilde \mu) \,:=\, \inf \, \mathbb E \big[ \sup_{0 \le t \le T} \| \xi(t) - \widetilde \xi(t) \|^2 \big], $$
where $\mu = \mathrm{Law}(\xi(\cdot))$, $\widetilde \mu = \mathrm{Law}(\widetilde \xi(\cdot))$ and the infimum is taken over the joint laws of $(\xi(\cdot), \widetilde \xi(\cdot))$, and by using the Banach fixed point theorem.
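For the linear example the Picard iterates can be followed explicitly: the $n$-th iterate keeps the first $n$ terms of the series for $X_1$, and its variance increases to the Bessel limit $t e^{-2t}(I_0(2t) + I_1(2t))$, illustrating the fixed-point convergence. A numerical sketch (assuming scipy; evaluating the terms via the regularized incomplete gamma function is an implementation choice):

```python
import numpy as np
from scipy.special import gammainc, gammaln, iv

def picard_variance(t, n):
    # Variance of the n-th Picard iterate in the linear example:
    # sum_{k < n} \int_0^t e^{-2v} v^{2k} / (k!)^2 dv, using
    # \int_0^t e^{-2v} v^{2k} dv = Gamma(2k+1) P(2k+1, 2t) / 2^{2k+1},
    # where P is the regularized lower incomplete gamma function.
    total = 0.0
    for k in range(n):
        log_coef = gammaln(2 * k + 1) - (2 * k + 1) * np.log(2.0) - 2.0 * gammaln(k + 1)
        total += np.exp(log_coef) * gammainc(2 * k + 1, 2.0 * t)
    return total

t = 1.0
bessel_limit = t * np.exp(-2.0 * t) * (iv(0, 2.0 * t) + iv(1, 2.0 * t))
```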
Consider
$$ S \,:=\, \Big\{ \varphi(\cdot) \,:\, \kappa \Big( \int_0^t \varphi(s) \, ds + \int_0^t e^{-(t-s)} \int_0^s \varphi(u) \, du \, ds \Big) \le c_1 \, \varphi(t), \ 0 \le t \le T \Big\}. $$
Note that $c > \kappa$, $\big( 1 - e^{-(c - \kappa) T} \big) / (c - \kappa) < 1$, and $\varphi_1(t) := e^{ct}$ satisfies
$$ \kappa \Big( \int_0^t \varphi_1(s) \, ds + \int_0^t e^{-(t-s)} \int_0^s \varphi_1(u) \, du \, ds \Big) \,=\, c_1 \, \varphi_1(t), \qquad 0 \le t \le T, $$
and so $\varphi_1(\cdot) \in S$ and $S$ is non-empty. If $f \in S$, then $c_0 f \in S$ for every positive constant $c_0 > 0$.
Also, if $f, g \in S$, then $a f + (1-a) g \in S$ for every $a \in [0,1]$, and hence $S$ is convex. Moreover, since $\big( 1 - e^{-(x - \kappa) T} \big) / (x - \kappa)$ is a non-decreasing function of $x$ for $x > \kappa$, $\varphi_2(t) := e^{c_2 t}$ with $0 < c_2 \le c$ satisfies
$$ \kappa \Big( \int_0^t \varphi_2(s) \, ds + \int_0^t e^{-(t-s)} \int_0^s \varphi_2(u) \, du \, ds \Big) \,=\, \frac{1 - e^{-(c_2 - \kappa) T}}{c_2 - \kappa} \, \varphi_2(t) \,\le\, c_1 \, \varphi_2(t), \qquad 0 \le t \le T, $$
and hence $\varphi_2(\cdot) \in S$ for every $0 < c_2 \le c$.
We may extend to the case of the form
$$ dX(t) \,=\, b(t, X(t), X_1(t), \ldots, X_q(t)) \, dt + \sigma(t, X(t), X_1(t), \ldots, X_q(t)) \, dB(t) $$
with $\mathrm{Law}(X(\cdot)) = \mathrm{Law}(X_1(\cdot)) = \cdots = \mathrm{Law}(X_q(\cdot))$ for some $q \in \mathbb N$ and with Lipschitz coefficients, where $X_i(\cdot)$ is adapted to the filtration generated by $(B_k(\cdot), k \ge i)$ for $i \in \mathbb N$.
Coming back to the diffusions on the graph. We say the infinite-dimensional matrix $x = (x_{i,j})_{(i,j) \in \mathbb N^2}$ is row-finite if for each $i \in \mathbb N$ there is $k(i) \in \mathbb N$ such that $x_{i,j} = 0$ for every $j \ge k(i)$.

We say the infinite-dimensional matrix $x = (x_{i,j})_{(i,j) \in \mathbb N^2}$ is uniformly row-finite if there is $n_0 \in \mathbb N$ such that $x_{i,j} = 0$ for every $i \in \mathbb N$ and every $j$ with $|i - j| \ge n_0$.

We also say the infinite-dimensional matrix $(x_{i,j})_{(i,j) \in \mathbb N^2}$ is (uniformly) column-finite if its transpose $(x_{j,i})_{(i,j) \in \mathbb N^2}$ is (uniformly) row-finite.

Let us denote by $\mathcal A$ the class of uniformly positive definite, bounded, infinite-dimensional matrices which are both uniformly row-finite and uniformly column-finite.
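On finite truncations these definitions reduce to a band condition; a small illustrative predicate (a sketch only, since the definitions concern genuinely infinite matrices):

```python
import numpy as np

def is_uniformly_band_finite(x, n0):
    # Finite-truncation version of "uniformly row-finite": x_{i,j} = 0
    # whenever |i - j| >= n0.  Applying it to x.T checks the column
    # version, so it captures both conditions for banded truncations.
    n, m = x.shape
    i, j = np.indices((n, m))
    return bool(np.all(x[np.abs(i - j) >= n0] == 0.0))

# Truncation of the shift example A^{(infty)}: -1 on the diagonal,
# +1 on the superdiagonal; all entries sit in a band of width 2.
A_trunc = -np.eye(5) + np.diag(np.ones(4), 1)
```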
Suppose that there exist $\lambda_u > \lambda_d > 0$ such that all the eigenvalues of $A^{(N)}$ are bounded above by $\lambda_u$ and below by $\lambda_d$ for every $N$, and, as $N \to \infty$, each $(i,j)$ element $a^{(N)}_{i,j}$ of $A^{(N)}$ converges almost surely to the $(i,j)$ element $a^{(\infty)}_{i,j}$ of a fixed matrix $A^{(\infty)} \in \mathcal A$ for every $(i,j) \in \mathbb N^2$, i.e.,
$$ \lim_{N \to \infty} a^{(N)}_{i,j} \,=\, a^{(\infty)}_{i,j}. $$
Assume the first $k$ elements $X^{(k,N)}(0) = (X_1(0), \ldots, X_k(0))$ of the initial random variables $X^{(N)}(0)$ converge weakly to an $\mathbb R^k$-valued random vector $\xi^{(k)}$ for every $k \in \mathbb N$. We also assume that $\sup_N \mathbb E[\| X^{(k,N)}(0) \|^4] < \infty$ for every $k \in \mathbb N$.
Then for every $k \in \mathbb N$ and $T > 0$, as $N \to \infty$, the law of the first $k$ elements $X^{(k,N)}(\cdot) = (X_1(\cdot), \ldots, X_k(\cdot))$ of $X^{(N)}(\cdot) = (X_1(\cdot), \ldots, X_N(\cdot))$ converges weakly in $C([0,T])$ to the law of the first $k$-dimensional stochastic process $Y^{(k)}(\cdot) := (Y_1(\cdot), \ldots, Y_k(\cdot))$ of $Y(\cdot) := (Y_i(\cdot))_{i \in \mathbb N}$ defined by
$$ Y(t) \,=\, e^{t A^{(\infty)}} Y(0) + \int_0^t e^{(t-s) A^{(\infty)}} \, dW(s), \qquad t \ge 0, $$
where $\mathrm{Law}(Y^{(k)}(0)) = \mathrm{Law}(\xi^{(k)})$ for every $k \in \mathbb N$ and $W(\cdot)$ is the $\mathbb R^\infty$-valued standard Brownian motion.
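For the shift example this convergence can be seen numerically: the variance of the first coordinate of the size-$N$ truncated system approaches the Bessel limit $t e^{-2t}(I_0(2t) + I_1(2t))$ as $N$ grows. A sketch assuming scipy (the truncation of $A^{(\infty)}$ and the quadrature are implementation choices, not from the slides):

```python
import numpy as np
from scipy.integrate import simpson
from scipy.linalg import expm
from scipy.special import iv

def var_first_coordinate(N, t, n_grid=201):
    # Size-N truncation of the shift example: -1 on the diagonal,
    # +1 on the superdiagonal.  Var(X_1(t)) is the (0, 0) element of
    # \int_0^t e^{v Abar} e^{v Abar^T} dv.
    Abar = -np.eye(N) + np.diag(np.ones(N - 1), 1)
    grid = np.linspace(0.0, t, n_grid)
    integrand = np.array([(expm(v * Abar) @ expm(v * Abar).T)[0, 0] for v in grid])
    return simpson(integrand, x=grid)

t = 1.0
bessel_limit = t * np.exp(-2.0 * t) * (iv(0, 2.0 * t) + iv(1, 2.0 * t))
```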
Summary
Examples of linear systems on an infinite graph.
A class of stochastic differential equations with restrictions on their distributions.

Thank you all for your attention, and Happy Birthday!
Part of this research is supported by NSF grants DMS-13-13373 and DMS-16-15229.