EXPONENTIAL STABILITY AND INSTABILITY OF STOCHASTIC NEURAL NETWORKS^1

X. X. Liao^2 and X. Mao^3


Department of Statistics and Modelling Science
University of Strathclyde
Glasgow G1 1XH, Scotland, U.K.

ABSTRACT. In this paper we shall discuss the effects of stochastic perturbation on the stability of a neural network $\dot u(t) = -Bu(t) + Ag(u(t))$. Suppose the stochastically perturbed neural network is described by an Itô equation $dx(t) = [-Bx(t) + Ag(x(t))]\,dt + \sigma(x(t))\,dw(t)$. The general theory on the almost sure exponential stability and instability of the stochastically perturbed neural network is first established. The theory is then applied to investigate the stochastic stabilization and destabilization of the neural network. Several interesting examples are also given for illustration.

1. Introduction

Much of the current interest in artificial neural networks stems not only from their richness as a theoretical model of collective dynamics but also from the promise they have shown as a practical tool for performing parallel computation (cf. Denker [2]). Theoretical understanding of neural-network dynamics has advanced greatly in the past ten years (cf. [1, 4-7, 11]).

^1 Supported by the Royal Society.
^2 Permanent address: Department of Mathematics, Huazhong Normal University, Wuhan, P.R. China.
^3 Please address any correspondence regarding this paper to this author.

The neural network proposed by Hopfield [4] can be described by an ordinary differential equation of the form

$$C_i\dot u_i(t) = -\frac{u_i(t)}{R_i} + \sum_{j=1}^n T_{ij}\,g_j(u_j(t)), \quad 1 \le i \le n, \qquad (1.1)$$

on $t \ge 0$. The variable $u_i(t)$ represents the voltage on the input of the $i$th neuron. Each neuron is characterized by an input capacitance $C_i$ and a transfer function $g_i(u)$. The connection matrix element $T_{ij}$ has the value $+1/R_{ij}$ when the non-inverting output of the $j$th neuron is connected to the input of the $i$th neuron through a resistance $R_{ij}$, and the value $-1/R_{ij}$ when the inverting output of the $j$th neuron is connected to the input of the $i$th neuron through a resistance $R_{ij}$. The parallel resistance at the input of each neuron is defined by $R_i = (\sum_{j=1}^n |T_{ij}|)^{-1}$. The nonlinear transfer function $g_i(u)$ is sigmoidal, saturating at $\pm 1$ with maximum slope at $u = 0$. In mathematical terms, $g_i(u)$ is nondecreasing,

$$u\,g_i(u) \ge 0 \quad\text{and}\quad |g_i(u)| \le \beta_i|u| \quad\text{for all } -\infty < u < \infty, \qquad (1.2)$$

where $\beta_i$ is the slope of $g_i(u)$ at $u = 0$ and is supposed to be finite. By defining

$$b_i = \frac{1}{C_iR_i}, \qquad a_{ij} = \frac{T_{ij}}{C_i},$$

equation (1.1) can be rewritten as

$$\dot u_i(t) = -b_iu_i(t) + \sum_{j=1}^n a_{ij}\,g_j(u_j(t)), \quad 1 \le i \le n, \qquad (1.3)$$

or equivalently

$$\dot u(t) = -Bu(t) + Ag(u(t)), \quad t \ge 0, \qquad (1.4)$$

where $u(t) = (u_1(t), \dots, u_n(t))^T$, $B = \mathrm{diag}(b_1, \dots, b_n)$, $A = (a_{ij})_{n\times n}$ and $g(u) = (g_1(u_1), \dots, g_n(u_n))^T$. Moreover, we always have

$$b_i = \sum_{j=1}^n |a_{ij}|, \quad 1 \le i \le n. \qquad (1.5)$$

It is clear that, given any initial data $u(0) = x_0 \in \mathbb{R}^n$, equation (1.4) has a unique global solution on $t \ge 0$. In particular, the equation admits an equilibrium solution $u(t) \equiv 0$ (i.e. the solution with initial data $u(0) = 0$).
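As a concrete illustration (ours, not part of the paper), the deterministic network (1.4) can be integrated numerically. The following minimal sketch assumes tanh transfer functions (so each $\beta_i = 1$) and a small illustrative connection matrix; the function name and all numerical values are our own choices:

```python
import numpy as np

def hopfield_rhs(x, B, A, g=np.tanh):
    """Right-hand side of the deterministic network (1.4): -Bx + Ag(x)."""
    return -B @ x + A @ g(x)

# Illustrative 2-neuron network; B is chosen so that b_i = sum_j |a_ij|, as in (1.5).
A = np.array([[0.5, -0.5],
              [0.3,  0.7]])
B = np.diag(np.abs(A).sum(axis=1))

# Forward-Euler integration from a nonzero initial state.
x, dt = np.array([1.0, -0.5]), 1e-3
for _ in range(int(10.0 / dt)):
    x = x + dt * hopfield_rhs(x, B, A)
print(x)  # for this choice the trajectory decays toward the equilibrium 0
```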

The stability problem of this equilibrium solution has been studied by many authors, e.g. Cohen & Grossberg [1], Liao [7], Guez et al. [11]. The aim of this paper is to investigate the effects of stochastic perturbation on this stability. Suppose there exists a stochastic perturbation of the neural network and the stochastically perturbed network is described by a stochastic differential equation

$$\begin{cases} dx(t) = [-Bx(t) + Ag(x(t))]\,dt + \sigma(x(t))\,dw(t) \quad\text{on } t \ge 0, \\ x(0) = x_0 \in \mathbb{R}^n, \end{cases} \qquad (1.6)$$

where $w(t) = (w_1(t), \dots, w_m(t))^T$ is an $m$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with the natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ (i.e. $\mathcal{F}_t = \sigma\{w(s) : 0 \le s \le t\}$), and $\sigma : \mathbb{R}^n \to \mathbb{R}^{n\times m}$, i.e. $\sigma(x) = (\sigma_{ij}(x))_{n\times m}$. Throughout this paper we always assume that $\sigma(x)$ is locally Lipschitz continuous and satisfies the linear growth condition as well. It is then known (cf. Friedman [3] or Mao [9]) that equation (1.6) has a unique global solution on $t \ge 0$, which is denoted by $x(t; x_0)$. Moreover, we also assume $\sigma(0) = 0$ for the stability purposes of this paper. So equation (1.6) admits an equilibrium solution $x(t; 0) \equiv 0$. It is also easy to see from uniqueness that whenever the initial data $x_0 \ne 0$, the solution will never be zero with probability one, that is, $x(t; x_0) \ne 0$ for all $t \ge 0$ a.s.

Now that equation (1.6) is a stochastically perturbed system of equation (1.4), it is interesting to know how the stochastic perturbation affects the stability property of equation (1.4). That is, when equation (1.4) is stable, it is useful to know whether the perturbed equation (1.6) remains stable or becomes unstable; and when equation (1.4) is unstable, it is useful to know whether the perturbed equation (1.6) becomes stable or remains unstable. In the following sections we shall discuss these problems in detail.
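Although the paper's analysis is purely theoretical, paths of (1.6) can be simulated to visualize these stability questions. Below is a minimal Euler-Maruyama sketch (our illustration, not part of the paper); the function name, the tanh transfer function and the particular diffusion $\sigma$ are assumptions:

```python
import numpy as np

def euler_maruyama(x0, B, A, sigma, g=np.tanh, T=10.0, dt=1e-3, seed=0):
    """Simulate one path of (1.6), dx = [-Bx + Ag(x)]dt + sigma(x)dw,
    driven by an m-dimensional Brownian motion, via Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        s = sigma(x)                                   # n x m diffusion matrix
        dw = rng.normal(scale=np.sqrt(dt), size=s.shape[1])
        x = x + dt * (-B @ x + A @ g(x)) + s @ dw
    return x

# Illustrative use with a scalar-noise perturbation sigma(x) = theta * x (m = 1).
A = np.array([[0.5, -0.5], [0.3, 0.7]])
B = np.diag(np.abs(A).sum(axis=1))                     # enforces (1.5)
sigma = lambda x: 1.5 * x[:, None]
print(euler_maruyama([1.0, -0.5], B, A, sigma))
```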

2. Exponential Stability

In this section we shall discuss the exponential stability of the stochastic neural network (1.6).

Theorem 2.1 Assume there exist a symmetric positive definite matrix $Q = (q_{ij})_{n\times n}$ and a pair of numbers $\mu \in \mathbb{R}$ and $\rho \ge 0$ such that

$$2x^TQ[-Bx + Ag(x)] + \mathrm{trace}[\sigma^T(x)Q\sigma(x)] \le \mu\,x^TQx, \qquad (2.1)$$

$$x^TQ\sigma(x)\sigma^T(x)Qx \ge \rho\,(x^TQx)^2 \qquad (2.2)$$

for all $x \in \mathbb{R}^n$. Then the solution of equation (1.6) satisfies

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -\Big(\rho - \frac{\mu}{2}\Big) \quad\text{a.s.} \qquad (2.3)$$

whenever $x_0 \ne 0$. In particular, if $\rho > \mu/2$ then the stochastic neural network (1.6) is almost surely exponentially stable.

Proof. Fix any $x_0 \ne 0$ arbitrarily and write $x(t;x_0) = x(t)$ simply. Note from the uniqueness of the solution that $x(t) \ne 0$ for all $t \ge 0$ a.s. So one can apply the well-known Itô formula to obtain

$$d\log[x^T(t)Qx(t)] = \frac{2x^T(t)Q[-Bx(t) + Ag(x(t))] + \mathrm{trace}[\sigma^T(x(t))Q\sigma(x(t))]}{x^T(t)Qx(t)}\,dt - \frac{2\,x^T(t)Q\sigma(x(t))\sigma^T(x(t))Qx(t)}{[x^T(t)Qx(t)]^2}\,dt + \frac{2}{x^T(t)Qx(t)}\,x^T(t)Q\sigma(x(t))\,dw(t).$$

In view of condition (2.1) we obtain

$$\log[x^T(t)Qx(t)] \le \log[x_0^TQx_0] + \mu t - 2\langle M\rangle(t) + 2M(t) \quad\text{a.s.} \qquad (2.4)$$

for all $t \ge 0$, where

$$M(t) = \int_0^t \frac{1}{x^T(s)Qx(s)}\,x^T(s)Q\sigma(x(s))\,dw(s),$$

which is a continuous martingale vanishing at $t = 0$, and $\langle M\rangle(t)$ is its quadratic variation, i.e.

$$\langle M\rangle(t) = \int_0^t \frac{1}{[x^T(s)Qx(s)]^2}\,x^T(s)Q\sigma(x(s))\sigma^T(x(s))Qx(s)\,ds.$$

By condition (2.2) it is easy to see that

$$\langle M\rangle(t) \ge \rho t. \qquad (2.5)$$

Now let $k = 1, 2, \dots$ and let $\varepsilon \in (0, 1)$ be arbitrary. Using the well-known exponential martingale inequality (cf. Métivier [10]) one can derive that

$$P\left(\omega : \sup_{0\le t\le k}\Big[M(t) - \frac{\varepsilon}{2}\langle M\rangle(t)\Big] > \frac{2}{\varepsilon}\log k\right) \le \frac{1}{k^2}.$$

Hence the Borel-Cantelli lemma yields that for almost all $\omega \in \Omega$ there exists a random integer $k_0(\omega)$ such that for all $k \ge k_0$

$$\sup_{0\le t\le k}\Big[M(t) - \frac{\varepsilon}{2}\langle M\rangle(t)\Big] \le \frac{2}{\varepsilon}\log k,$$

that is,

$$M(t) \le \frac{\varepsilon}{2}\langle M\rangle(t) + \frac{2}{\varepsilon}\log k, \quad 0 \le t \le k.$$

Substituting this into (2.4) yields

$$\log[x^T(t)Qx(t)] \le \log[x_0^TQx_0] + \mu t - (2-\varepsilon)\langle M\rangle(t) + \frac{4}{\varepsilon}\log k$$

for all $0 \le t \le k$ and $k \ge k_0$ almost surely. By (2.5) one therefore obtains that

$$\log[x^T(t)Qx(t)] \le \log[x_0^TQx_0] - [(2-\varepsilon)\rho - \mu]\,t + \frac{4}{\varepsilon}\log k$$

for all $0 \le t \le k$ and $k \ge k_0$ almost surely. So for almost all $\omega \in \Omega$, if $k-1 \le t \le k$ and $k \ge k_0$, then

$$\frac{1}{t}\log[x^T(t)Qx(t)] \le -[(2-\varepsilon)\rho - \mu] + \frac{1}{k-1}\Big(\log[x_0^TQx_0] + \frac{4}{\varepsilon}\log k\Big).$$

This implies

$$\limsup_{t\to\infty}\frac{1}{t}\log[x^T(t)Qx(t)] \le -[(2-\varepsilon)\rho - \mu] \quad\text{a.s.}$$

Letting $\varepsilon \to 0$ we obtain

$$\limsup_{t\to\infty}\frac{1}{t}\log[x^T(t)Qx(t)] \le -(2\rho - \mu) \quad\text{a.s.} \qquad (2.6)$$

On the other hand, note

$$\lambda_{\min}|x|^2 \le x^TQx, \quad x \in \mathbb{R}^n,$$

since $Q$ is a symmetric positive definite matrix, where $\lambda_{\min} > 0$ is the smallest eigenvalue of $Q$. Consequently, it follows from (2.6) that

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t)|) \le -\Big(\rho - \frac{\mu}{2}\Big) \quad\text{a.s.}$$

as required. The proof is complete.
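The bound (2.3) can be probed numerically (our sketch, not from the paper): simulate a path and compare the empirical growth rate of $\log|x(t)|$ with the predicted exponent. The linear noise $\sigma(x) = \theta x$, the tanh transfer functions and all parameters below are illustrative assumptions:

```python
import numpy as np

def empirical_exponent(B, A, theta, T=100.0, dt=1e-3, seed=1):
    """Crude estimate of (1/T) log|x(T)| for the linear-noise network
    dx = [-Bx + A tanh(x)]dt + theta x dw (one scalar Brownian motion)."""
    rng = np.random.default_rng(seed)
    x = 0.1 * np.ones(B.shape[0])
    for _ in range(int(T / dt)):
        x = x + dt * (-B @ x + A @ np.tanh(x)) + theta * x * rng.normal(scale=np.sqrt(dt))
    return np.log(np.linalg.norm(x)) / T

A = np.array([[0.5, -0.5], [0.3, 0.7]])
B = np.diag(np.abs(A).sum(axis=1))
# With Q = I and sigma(x) = theta x, condition (2.2) holds with rho = theta^2, while
# (2.1) holds with mu = 2(max_i beta_i ||A|| - min_i b_i) + theta^2 (cf. Example 4.1
# later in the paper); so (2.3) predicts an exponent of at most
# -(theta^2/2 - (||A|| - min_i b_i)) for tanh, where each beta_i = 1.
theta = 2.0
print(empirical_exponent(B, A, theta))
```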

We now employ this theorem to establish a number of useful corollaries.

Corollary 2.2 Let (1.2) hold. Assume that there exist a positive definite diagonal matrix $Q = \mathrm{diag}(q_1, q_2, \dots, q_n)$ and two real numbers $\mu > 0$, $\rho \ge 0$ such that

$$\mathrm{trace}[\sigma^T(x)Q\sigma(x)] \le \mu\,x^TQx, \qquad x^TQ\sigma(x)\sigma^T(x)Qx \ge \rho\,(x^TQx)^2$$

for all $x \in \mathbb{R}^n$. Let $\lambda_{\max}(H)$ denote the biggest eigenvalue of the symmetric matrix $H = (h_{ij})_{n\times n}$ defined by

$$h_{ij} = \begin{cases} 2q_i[-b_i + (0 \vee a_{ii})\beta_i] & \text{for } i = j, \\ q_i|a_{ij}|\beta_j + q_j|a_{ji}|\beta_i & \text{for } i \ne j. \end{cases}$$

Then the solution of equation (1.6) satisfies

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -\left(\rho - \frac12\Big[\mu + \frac{\lambda_{\max}(H)}{\min_{1\le i\le n}q_i}\Big]\right) \quad\text{a.s.} \qquad (2.7)$$

if $\lambda_{\max}(H) \ge 0$, or otherwise

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -\left(\rho - \frac12\Big[\mu + \frac{\lambda_{\max}(H)}{\max_{1\le i\le n}q_i}\Big]\right) \quad\text{a.s.} \qquad (2.8)$$

whenever $x_0 \ne 0$.

Proof. Compute, by (1.2),

$$2x^TQAg(x) = 2\sum_{i,j=1}^n x_iq_ia_{ij}g_j(x_j) \le 2\sum_i q_i(0\vee a_{ii})x_ig_i(x_i) + 2\sum_{i\ne j}|x_i|\,q_i|a_{ij}|\beta_j\,|x_j| \le 2\sum_i q_i(0\vee a_{ii})\beta_ix_i^2 + \sum_{i\ne j}|x_i|\,(q_i|a_{ij}|\beta_j + q_j|a_{ji}|\beta_i)\,|x_j|.$$

Thus, in the case $\lambda_{\max}(H) \ge 0$,

$$2x^TQ[-Bx + Ag(x)] \le (|x_1|, \dots, |x_n|)\,H\,(|x_1|, \dots, |x_n|)^T \le \lambda_{\max}(H)|x|^2 \le \frac{\lambda_{\max}(H)}{\min_{1\le i\le n}q_i}\,x^TQx,$$

and then conclusion (2.7) follows from Theorem 2.1 easily. Similarly, in the case $\lambda_{\max}(H) < 0$,

$$2x^TQ[-Bx + Ag(x)] \le \lambda_{\max}(H)|x|^2 \le \frac{\lambda_{\max}(H)}{\max_{1\le i\le n}q_i}\,x^TQx,$$

and then conclusion (2.8) follows from Theorem 2.1 again. The proof is complete.

Corollary 2.3 Let both (1.2) and (1.5) hold. Assume that there exist $n$ positive numbers $q_1, q_2, \dots, q_n$ such that

$$\beta_j^2\sum_{i=1}^n q_i\,[0 \vee \mathrm{sign}(a_{ii})]^{\delta_{ij}}\,|a_{ij}| \le q_jb_j, \quad 1 \le j \le n,$$

where

$$\delta_{ij} = \begin{cases} 1 & \text{for } i = j, \\ 0 & \text{for } i \ne j. \end{cases}$$

Moreover assume

$$\mathrm{trace}[\sigma^T(x)Q\sigma(x)] \le \mu\,x^TQx, \qquad x^TQ\sigma(x)\sigma^T(x)Qx \ge \rho\,(x^TQx)^2$$

for all $x \in \mathbb{R}^n$, where $Q = \mathrm{diag}(q_1, q_2, \dots, q_n)$ and $\mu > 0$, $\rho \ge 0$ are both constants. Then the solution of equation (1.6) satisfies

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -\Big(\rho - \frac{\mu}{2}\Big) \quad\text{a.s.}$$

whenever $x_0 \ne 0$.

Proof. Compute, by the conditions,

$$2x^TQAg(x) = 2\sum_{i,j=1}^n x_iq_ia_{ij}g_j(x_j) \le 2\sum_{i,j=1}^n |x_i|\,q_i\,[0\vee\mathrm{sign}(a_{ii})]^{\delta_{ij}}\,|a_{ij}|\,\beta_j\,|x_j| \le \sum_{i,j=1}^n q_i\,[0\vee\mathrm{sign}(a_{ii})]^{\delta_{ij}}\,|a_{ij}|\,(x_i^2 + \beta_j^2x_j^2) \le \sum_{i=1}^n \Big(\sum_{j=1}^n |a_{ij}|\Big)q_ix_i^2 + \sum_{j=1}^n \Big(\beta_j^2\sum_{i=1}^n q_i\,[0\vee\mathrm{sign}(a_{ii})]^{\delta_{ij}}\,|a_{ij}|\Big)x_j^2 \le \sum_{i=1}^n q_ib_ix_i^2 + \sum_{j=1}^n q_jb_jx_j^2 = 2x^TQBx.$$

Hence

$$2x^TQ[-Bx + Ag(x)] + \mathrm{trace}[\sigma^T(x)Q\sigma(x)] \le \mu\,x^TQx.$$

Then the conclusion follows from Theorem 2.1. The proof is complete.
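As a quick numerical aid (ours, not in the paper), the matrix $H$ of Corollary 2.2 and its largest eigenvalue can be computed directly. A minimal sketch, assuming illustrative values for $A$, $\beta$ and $Q$:

```python
import numpy as np

def stability_matrix_H(A, beta, q, b=None):
    """Build the symmetric matrix H of Corollary 2.2:
    h_ii = 2 q_i [-b_i + max(0, a_ii) beta_i],
    h_ij = q_i |a_ij| beta_j + q_j |a_ji| beta_i   (i != j)."""
    n = A.shape[0]
    b = np.abs(A).sum(axis=1) if b is None else b   # default enforces (1.5)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                H[i, i] = 2 * q[i] * (-b[i] + max(0.0, A[i, i]) * beta[i])
            else:
                H[i, j] = q[i] * abs(A[i, j]) * beta[j] + q[j] * abs(A[j, i]) * beta[i]
    return H

A = np.array([[0.5, -0.5], [0.3, 0.7]])
beta, q = np.ones(2), np.ones(2)
lam_max = np.linalg.eigvalsh(stability_matrix_H(A, beta, q)).max()
print(lam_max)   # feeds the decay-rate bound (2.7) or (2.8)
```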

Corollary 2.4 Let both (1.2) and (1.5) hold. Assume the network is symmetric in the sense

$$a_{ij} = a_{ji} \quad\text{for all } 1 \le i, j \le n.$$

Moreover assume

$$\mathrm{trace}[\sigma^T(x)\sigma(x)] \le \mu|x|^2, \qquad x^T\sigma(x)\sigma^T(x)x \ge \rho|x|^4$$

for all $x \in \mathbb{R}^n$, where $\mu > 0$ and $\rho \ge 0$ are both constants. Then the solution of equation (1.6) satisfies

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -\Big(\rho + \hat b(1 - \check\beta) - \frac{\mu}{2}\Big) \quad\text{a.s.} \qquad (2.9)$$

if $\check\beta \le 1$, or

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -\Big(\rho - \check b(\check\beta - 1) - \frac{\mu}{2}\Big) \quad\text{a.s.} \qquad (2.10)$$

if $1 < \check\beta$, whenever $x_0 \ne 0$, where $\check\beta = \max_{1\le i\le n}\beta_i$, $\check b = \max_{1\le i\le n}b_i$, $\hat b = \min_{1\le i\le n}b_i$.

Proof. Compute

$$2x^TAg(x) = 2\sum_{i,j=1}^n x_ia_{ij}g_j(x_j) \le 2\sum_{i,j=1}^n |x_i|\,|a_{ij}|\,\beta_j|x_j| \le \check\beta\sum_{i,j=1}^n |a_{ij}|(x_i^2 + x_j^2) = \check\beta\left[\sum_{i=1}^n\Big(\sum_{j=1}^n|a_{ij}|\Big)x_i^2 + \sum_{j=1}^n\Big(\sum_{i=1}^n|a_{ji}|\Big)x_j^2\right] = \check\beta\left[\sum_{i=1}^n b_ix_i^2 + \sum_{j=1}^n b_jx_j^2\right] = 2\check\beta\,x^TBx.$$

Hence

$$2x^T[-Bx + Ag(x)] \le -2(1 - \check\beta)x^TBx.$$

Therefore, in the case $\check\beta \le 1$,

$$2x^T[-Bx + Ag(x)] + \mathrm{trace}[\sigma^T(x)\sigma(x)] \le [-2\hat b(1 - \check\beta) + \mu]\,|x|^2,$$

and conclusion (2.9) follows from Theorem 2.1 with $Q$ = the identity matrix. On the other hand, in the case $1 < \check\beta$,

$$2x^T[-Bx + Ag(x)] + \mathrm{trace}[\sigma^T(x)\sigma(x)] \le [2\check b(\check\beta - 1) + \mu]\,|x|^2,$$

and conclusion (2.10) follows from Theorem 2.1 again. The proof is complete.

3. Exponential Instability

In this section we shall discuss the exponential instability of the stochastic neural network described by equation (1.6).

Theorem 3.1 Assume there exist a symmetric positive definite matrix $Q = (q_{ij})_{n\times n}$ and two real numbers $\mu \in \mathbb{R}$, $\rho > 0$ such that

$$2x^TQ[-Bx + Ag(x)] + \mathrm{trace}[\sigma^T(x)Q\sigma(x)] \ge \mu\,x^TQx, \qquad (3.1)$$

$$x^TQ\sigma(x)\sigma^T(x)Qx \le \rho\,(x^TQx)^2 \qquad (3.2)$$

for all $x \in \mathbb{R}^n$. Then

$$\liminf_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \ge \frac{\mu}{2} - \rho \quad\text{a.s.} \qquad (3.3)$$

whenever $x_0 \ne 0$. In particular, if $\rho < \mu/2$ then the stochastic neural network (1.6) is almost surely exponentially unstable.

Proof. Fix any $x_0 \ne 0$ arbitrarily and again write $x(t;x_0) = x(t)$ simply. By the Itô formula as well as conditions (3.1), (3.2) one can derive that

$$\log[x^T(t)Qx(t)] \ge \log[x_0^TQx_0] + (\mu - 2\rho)t + 2M(t) \quad\text{a.s.} \qquad (3.4)$$

for all $t \ge 0$, where

$$M(t) = \int_0^t \frac{1}{x^T(s)Qx(s)}\,x^T(s)Q\sigma(x(s))\,dw(s),$$

the same as before. Note from condition (3.2) that

$$\langle M\rangle(t) = \int_0^t \frac{1}{[x^T(s)Qx(s)]^2}\,x^T(s)Q\sigma(x(s))\sigma^T(x(s))Qx(s)\,ds \le \rho t.$$

It is known (cf. Liptser & Shiryayev [8]) that $M(t)/t \to 0$ almost surely as $t \to \infty$. Consequently (3.4) yields

$$\liminf_{t\to\infty}\frac{1}{t}\log[x^T(t)Qx(t)] \ge \mu - 2\rho \quad\text{a.s.} \qquad (3.5)$$

But, note

$$\lambda_{\max}|x|^2 \ge x^TQx, \quad x \in \mathbb{R}^n,$$

where $\lambda_{\max} > 0$ is the biggest eigenvalue of $Q$. Hence it follows from (3.5) that

$$\liminf_{t\to\infty}\frac{1}{t}\log(|x(t)|) \ge \frac{\mu}{2} - \rho \quad\text{a.s.}$$

as required. The proof is complete.

Corollary 3.2 Let (1.2) hold. Assume that there exist a positive definite diagonal matrix $Q = \mathrm{diag}(q_1, q_2, \dots, q_n)$ and two positive numbers $\mu$, $\rho$ such that

$$\mathrm{trace}[\sigma^T(x)Q\sigma(x)] \ge \mu\,x^TQx, \qquad x^TQ\sigma(x)\sigma^T(x)Qx \le \rho\,(x^TQx)^2$$

for all $x \in \mathbb{R}^n$. Let $\lambda_{\min}(S)$ denote the smallest eigenvalue of the symmetric matrix $S = (s_{ij})_{n\times n}$ defined by

$$s_{ij} = \begin{cases} 2q_i[-b_i + (0 \wedge a_{ii})\beta_i] & \text{for } i = j, \\ -q_i|a_{ij}|\beta_j - q_j|a_{ji}|\beta_i & \text{for } i \ne j. \end{cases}$$

Then the solution of equation (1.6) satisfies

$$\liminf_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \ge \frac12\Big[\mu + \frac{\lambda_{\min}(S)}{\min_{1\le i\le n}q_i}\Big] - \rho \quad\text{a.s.} \qquad (3.6)$$

whenever $x_0 \ne 0$.

Proof. In the same way as in the proof of Corollary 2.2 one can show that

$$2x^TQ[-Bx + Ag(x)] \ge (|x_1|, \dots, |x_n|)\,S\,(|x_1|, \dots, |x_n|)^T \ge \lambda_{\min}(S)|x|^2.$$

Note that we must have $\lambda_{\min}(S) \le 0$ since all the elements of $S$ are nonpositive. So

$$2x^TQ[-Bx + Ag(x)] \ge \frac{\lambda_{\min}(S)}{\min_{1\le i\le n}q_i}\,x^TQx,$$

and then conclusion (3.6) follows from Theorem 3.1 easily. The proof is complete.

Corollary 3.3 Let both (1.2) and (1.5) hold. Assume the network is symmetric in the sense

$$a_{ij} = a_{ji} \quad\text{for all } 1 \le i, j \le n.$$

Moreover assume

$$\mathrm{trace}[\sigma^T(x)\sigma(x)] \ge \mu|x|^2, \qquad x^T\sigma(x)\sigma^T(x)x \le \rho|x|^4$$

for all $x \in \mathbb{R}^n$, where both $\mu$ and $\rho$ are positive numbers. Then the solution of equation (1.6) satisfies

$$\liminf_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \ge \frac{\mu}{2} - \check b(1 + \check\beta) - \rho \quad\text{a.s.}$$

whenever $x_0 \ne 0$, where $\check\beta = \max_{1\le i\le n}\beta_i$ and $\check b = \max_{1\le i\le n}b_i$.

Proof. Compute

$$2x^TAg(x) = 2\sum_{i,j=1}^n x_ia_{ij}g_j(x_j) \ge -2\sum_{i,j=1}^n |x_i|\,|a_{ij}|\,\beta_j|x_j| \ge -\check\beta\sum_{i,j=1}^n |a_{ij}|(x_i^2 + x_j^2) = -\check\beta\left[\sum_{i=1}^n\Big(\sum_{j=1}^n|a_{ij}|\Big)x_i^2 + \sum_{j=1}^n\Big(\sum_{i=1}^n|a_{ji}|\Big)x_j^2\right] = -\check\beta\left[\sum_{i=1}^n b_ix_i^2 + \sum_{j=1}^n b_jx_j^2\right] = -2\check\beta\,x^TBx.$$

Hence

$$2x^T[-Bx + Ag(x)] \ge -2(1 + \check\beta)x^TBx \ge -2\check b(1 + \check\beta)|x|^2.$$

Therefore,

$$2x^T[-Bx + Ag(x)] + \mathrm{trace}[\sigma^T(x)\sigma(x)] \ge [\mu - 2\check b(1 + \check\beta)]\,|x|^2,$$

and the conclusion follows from Theorem 3.1 with $Q$ = the identity matrix. The proof is complete.

4. Stabilization by Linear Stochastic Perturbation

We know that the neural network $\dot u(t) = -Bu(t) + Ag(u(t))$ may sometimes be unstable. One might imagine that an unstable neural network would behave even worse (become more unstable) if the network were subjected to stochastic perturbation. However, this is not always true. In fact, as everything has two sides, stochastic perturbation may make a given unstable network nicer (stable). In this section we shall show that any neural network of the form (1.4) can be stabilized by stochastic perturbation. From the practical point of view we restrict ourselves to linear stochastic perturbation only. In other words we only consider stochastic perturbation of the form

$$\sigma(x(t))\,dw(t) = \sum_{k=1}^m B_kx(t)\,dw_k(t), \quad\text{i.e.}\quad \sigma(x) = (B_1x, B_2x, \dots, B_mx),$$

where $B_k$, $1 \le k \le m$, are all $n\times n$ matrices. In this case, the stochastically perturbed network (1.6) becomes

$$\begin{cases} dx(t) = [-Bx(t) + Ag(x(t))]\,dt + \sum_{k=1}^m B_kx(t)\,dw_k(t) \quad\text{on } t \ge 0, \\ x(0) = x_0 \in \mathbb{R}^n. \end{cases} \qquad (4.1)$$

Note that

$$\mathrm{trace}[\sigma^T(x)Q\sigma(x)] = \sum_{k=1}^m x^TB_k^TQB_kx$$

and

$$x^TQ\sigma(x)\sigma^T(x)Qx = \mathrm{trace}[\sigma^T(x)Qxx^TQ\sigma(x)] = \sum_{k=1}^m x^TB_k^TQx\,x^TQB_kx = \sum_{k=1}^m (x^TQB_kx)^2.$$

We immediately obtain the following useful result from Theorem 2.1.

Theorem 4.1 Assume there exist a symmetric positive definite matrix $Q = (q_{ij})_{n\times n}$ and a pair of numbers $\mu \in \mathbb{R}$ and $\rho \ge 0$ such that

$$2x^TQ[-Bx + Ag(x)] + \sum_{k=1}^m x^TB_k^TQB_kx \le \mu\,x^TQx$$

and

$$\sum_{k=1}^m (x^TQB_kx)^2 \ge \rho\,(x^TQx)^2$$

for all $x \in \mathbb{R}^n$. Then the solution of equation (4.1) satisfies

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -\Big(\rho - \frac{\mu}{2}\Big) \quad\text{a.s.}$$

whenever $x_0 \ne 0$. In particular, if $\rho > \mu/2$ then the stochastic neural network (4.1) is almost surely exponentially stable.

Let us now explain through examples how one can apply this theorem to stabilize a given neural network.

Example 4.1 Let $B_k = \theta_kI$ for $1 \le k \le m$, where $I$ is the identity matrix and $\theta_k$, $1 \le k \le m$, are all real numbers. Then equation (4.1) becomes

$$dx(t) = [-Bx(t) + Ag(x(t))]\,dt + \sum_{k=1}^m \theta_kx(t)\,dw_k(t) \qquad (4.2)$$

(the initial data is omitted here). One can see that the numbers $\theta_k$, $1 \le k \le m$, represent the intensity of the stochastic perturbation. Choose $Q$ to be the identity matrix. Note in this case that

$$\sum_{k=1}^m x^TB_k^TQB_kx = \sum_{k=1}^m |B_kx|^2 = \sum_{k=1}^m \theta_k^2|x|^2 \qquad (4.3)$$

and

$$\sum_{k=1}^m (x^TQB_kx)^2 = \sum_{k=1}^m (x^T\theta_kx)^2 = \sum_{k=1}^m \theta_k^2|x|^4. \qquad (4.4)$$

Moreover, in view of (1.2) we have

$$2x^TQAg(x) \le 2|x|\,\|A\|\,|g(x)| \le 2\check\beta\|A\|\,|x|^2,$$

where $\check\beta = \max_{1\le k\le n}\beta_k$ and $\|\cdot\|$ denotes the operator norm of a matrix, i.e. $\|A\| = \sup\{|Ax| : x \in \mathbb{R}^n, |x| = 1\}$. Hence

$$2x^TQ[-Bx + Ag(x)] \le 2(\check\beta\|A\| - \hat b)|x|^2, \qquad (4.5)$$

where $\hat b = \min_{1\le k\le n}b_k$. Combining (4.3)-(4.5) and applying Theorem 4.1, we see that the solution of equation (4.2) satisfies

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -\left(\frac12\sum_{k=1}^m\theta_k^2 - (\check\beta\|A\| - \hat b)\right) \quad\text{a.s.}$$

whenever $x_0 \ne 0$. In particular, if the $\theta_k$'s are chosen large enough that $\sum_{k=1}^m\theta_k^2 > 2(\check\beta\|A\| - \hat b)$, then the stochastic neural network (4.2) is almost surely exponentially stable. Now if we choose $\theta_k = 0$ for $2 \le k \le m$, then equation (4.2) becomes the even simpler

$$dx(t) = [-Bx(t) + Ag(x(t))]\,dt + \theta_1x(t)\,dw_1(t). \qquad (4.6)$$

That is, we use only a scalar Brownian motion as the source of stochastic perturbation. This stochastic network is almost surely exponentially stable provided $\theta_1^2 > 2(\check\beta\|A\| - \hat b)$. From this simple example we see that if a strong enough stochastic perturbation is added to a neural network $\dot u(t) = -Bu(t) + Ag(u(t))$ in a suitable way, then the network can be stabilized. In other words, we have already obtained the following theorem.

Theorem 4.2 Any neural network of the form $\dot u(t) = -Bu(t) + Ag(u(t))$ can be stabilized by Brownian motion provided (1.2) is satisfied. Moreover, one can even use only a scalar Brownian motion to do so.
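Theorem 4.2 can be illustrated numerically (our sketch, not from the paper): take a deterministically unstable network, switch on scalar noise with $\theta_1$ above the sufficient level $\theta_1^2 > 2(\check\beta\|A\| - \hat b)$ from Example 4.1, and watch the empirical growth rate turn negative. The matrices and parameters below are illustrative assumptions; in particular this toy network ignores the normalization (1.5), which the stabilization results do not need:

```python
import numpy as np

def growth_rate(B, A, theta, T=100.0, dt=1e-3, seed=2):
    """Empirical (1/T) log(|x(T)|/|x(0)|) for equation (4.6):
    dx = [-Bx + A tanh(x)]dt + theta x dw_1."""
    rng = np.random.default_rng(seed)
    x = 0.1 * np.ones(B.shape[0])
    x0_norm = np.linalg.norm(x)
    for _ in range(int(T / dt)):
        x = x + dt * (-B @ x + A @ np.tanh(x)) + theta * x * rng.normal(scale=np.sqrt(dt))
    return np.log(np.linalg.norm(x) / x0_norm) / T

# A deterministically unstable network: self-excitation 2 exceeds the leak b = 1.
A, B = 2.0 * np.eye(2), np.eye(2)
beta_check = 1.0                                  # slope of tanh at 0
theta_star = np.sqrt(2 * (beta_check * np.linalg.norm(A, 2) - B.diagonal().min()))
print(growth_rate(B, A, theta=0.0))               # ~0: solution settles at a nonzero equilibrium
print(growth_rate(B, A, theta=theta_star + 1.0))  # negative: the noise stabilizes the origin
```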

Theorem 4.1 ensures that there are many choices of the matrices $B_k$ that stabilize a given network. Of course the choices in Example 4.1 are just the simplest ones. For illustration one more example is given here.

Example 4.2 For each $k$, choose a positive definite $n\times n$ matrix $D_k$ such that

$$x^TD_kx \ge \frac{\sqrt3}{2}\,\|D_k\|\,|x|^2 \quad\text{for all } x \in \mathbb{R}^n.$$

Obviously, there are lots of such matrices, e.g. $D_k$ = the identity matrix. Let $\theta$ be a real number and define $B_k = \theta D_k$. Then equation (4.1) becomes

$$dx(t) = [-Bx(t) + Ag(x(t))]\,dt + \theta\sum_{k=1}^m D_kx(t)\,dw_k(t). \qquad (4.7)$$

Again let $Q$ = the identity matrix. Note

$$\sum_{k=1}^m x^TB_k^TQB_kx = \theta^2\sum_{k=1}^m |D_kx|^2 \le \theta^2\sum_{k=1}^m \|D_k\|^2|x|^2$$

and

$$\sum_{k=1}^m (x^TQB_kx)^2 = \theta^2\sum_{k=1}^m (x^TD_kx)^2 \ge \frac{3\theta^2}{4}\sum_{k=1}^m \|D_k\|^2|x|^4.$$

Combining these together with (4.5) and then applying Theorem 4.1, we obtain that the solution of equation (4.7) satisfies

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -\left(\frac{\theta^2}{4}\sum_{k=1}^m\|D_k\|^2 - (\check\beta\|A\| - \hat b)\right) \quad\text{a.s.}$$

whenever $x_0 \ne 0$. So if

$$\theta^2 > 4(\check\beta\|A\| - \hat b)\Big(\sum_{k=1}^m\|D_k\|^2\Big)^{-1},$$

then the stochastic network (4.7) is almost surely exponentially stable.

From the above examples one can see that, in order to stabilize an unstable network, the linear stochastic perturbation should be strong enough. This is not surprising, since if the stochastic perturbation is too weak it may not be able to change the instability property of the network.

5. Destabilization by Linear Stochastic Perturbation

In the previous section we discussed the stochastic stabilization problem. Let us now turn to the opposite problem of stochastic destabilization. That is, we shall add stochastic perturbation to a given stable network in the hope that the perturbed network becomes unstable. Obviously the stochastic perturbation should be strong enough, otherwise the stability property will not be destroyed. However, the strength of the perturbation is not the only factor. As a matter of fact, the way in which the stochastic perturbation is added to the network is more important.

As seen in the previous section, sometimes the more strongly the stochastic perturbation is added, the more stable the network becomes. From the practical point of view, we again restrict ourselves to linear stochastic perturbation only. In other words, we still assume that the stochastically perturbed network is described by equation (4.1). Applying Theorem 3.1 to equation (4.1) we immediately obtain the following useful result.

Theorem 5.1 Assume there exist a symmetric positive definite matrix $Q = (q_{ij})_{n\times n}$ and a pair of numbers $\mu \in \mathbb{R}$ and $\rho > 0$ such that

$$2x^TQ[-Bx + Ag(x)] + \sum_{k=1}^m x^TB_k^TQB_kx \ge \mu\,x^TQx$$

and

$$\sum_{k=1}^m (x^TQB_kx)^2 \le \rho\,(x^TQx)^2$$

for all $x \in \mathbb{R}^n$. Then the solution of equation (4.1) satisfies

$$\liminf_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \ge \frac{\mu}{2} - \rho \quad\text{a.s.}$$

whenever $x_0 \ne 0$. In particular, if $\rho < \mu/2$ then the stochastic neural network (4.1) is almost surely exponentially unstable.

Let us now apply this theorem to show how one can use stochastic perturbation to destabilize a given network.

Example 5.1 First of all, let the dimension of the network be $n \ge 3$. Let $m = n$, i.e. choose an $n$-dimensional Brownian motion $(w_1(t), w_2(t), \dots, w_n(t))^T$. Let $\theta$ be a real number. For each $k = 1, 2, \dots, n-1$, define $B_k = (b_{kij})_{n\times n}$ by $b_{kij} = \theta$ if $i = k$ and $j = k+1$, and $b_{kij} = 0$ otherwise; moreover define $B_n = (b_{nij})_{n\times n}$ by $b_{nij} = \theta$ if $i = n$ and $j = 1$, and $b_{nij} = 0$ otherwise. Then the stochastic network (4.1) becomes

$$dx(t) = [-Bx(t) + Ag(x(t))]\,dt + \theta\begin{pmatrix} x_2(t)\,dw_1(t) \\ \vdots \\ x_n(t)\,dw_{n-1}(t) \\ x_1(t)\,dw_n(t) \end{pmatrix}. \qquad (5.1)$$

Let $Q$ = the identity matrix. Setting $x_{n+1} = x_1$, note

$$\sum_{k=1}^n x^TB_k^TQB_kx = \sum_{k=1}^n |B_kx|^2 = \sum_{k=1}^n \theta^2x_{k+1}^2 = \theta^2|x|^2. \qquad (5.2)$$

Also, again with $x_{n+1} = x_1$,

$$\sum_{k=1}^n (x^TQB_kx)^2 = \theta^2\sum_{k=1}^n x_k^2x_{k+1}^2 = \frac{2\theta^2}{3}\sum_{k=1}^n x_k^2x_{k+1}^2 + \frac{\theta^2}{3}\sum_{k=1}^n x_k^2x_{k+1}^2 \le \frac{2\theta^2}{3}\sum_{k=1}^n x_k^2x_{k+1}^2 + \frac{\theta^2}{6}\sum_{k=1}^n (x_k^4 + x_{k+1}^4) \le \frac{\theta^2}{3}|x|^4. \qquad (5.3)$$

Moreover, by (1.2),

$$2x^TQ[-Bx + Ag(x)] \ge -2(\check b + \check\beta\|A\|)|x|^2, \qquad (5.4)$$

where $\check b = \max_{1\le k\le n}b_k$ and $\check\beta = \max_{1\le k\le n}\beta_k$. Combining (5.2)-(5.4) and then applying Theorem 5.1, we see that the solution of equation (5.1) satisfies

$$\liminf_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \ge \frac{\theta^2}{2} - (\check b + \check\beta\|A\|) - \frac{\theta^2}{3} = \frac{\theta^2}{6} - (\check b + \check\beta\|A\|) \quad\text{a.s.}$$

whenever $x_0 \ne 0$. So the stochastic neural network (5.1) is almost surely exponentially unstable if $\theta^2 > 6(\check b + \check\beta\|A\|)$.
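The cyclic construction of Example 5.1 is easy to spot-check numerically (our sketch, not from the paper): each $B_k$ has the single entry $\theta$ in position $(k, k+1 \bmod n)$, so (5.2) holds exactly and (5.3) holds as an inequality. A minimal sketch with illustrative parameters:

```python
import numpy as np

def cyclic_noise_matrices(n, theta):
    """Noise matrices of Example 5.1: B_k has a single entry theta at (k, k+1 mod n)."""
    Bs = []
    for k in range(n):
        Bk = np.zeros((n, n))
        Bk[k, (k + 1) % n] = theta
        Bs.append(Bk)
    return Bs

# Spot-check identity (5.2) and the bound (5.3) on random vectors.
rng = np.random.default_rng(0)
n, theta = 4, 1.7
Bs = cyclic_noise_matrices(n, theta)
for _ in range(1000):
    x = rng.normal(size=n)
    s2 = sum(x @ Bk.T @ Bk @ x for Bk in Bs)          # should equal theta^2 |x|^2
    s3 = sum((x @ Bk @ x) ** 2 for Bk in Bs)          # should be <= theta^2/3 |x|^4
    assert np.isclose(s2, theta**2 * (x @ x))
    assert s3 <= theta**2 / 3 * (x @ x) ** 2 + 1e-12
print("identity (5.2) and bound (5.3) hold on all samples")
```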

Example 5.2 Secondly, let us consider the case when the dimension $n$ of the network is an even number, say $n = 2p$ ($p \ge 1$). Let $m = 1$, that is, choose a scalar Brownian motion $w_1(t)$. Let $\theta$ be a real number. Define

$$B_1 = \begin{pmatrix} 0 & \theta & & & \\ -\theta & 0 & & & \\ & & \ddots & & \\ & & & 0 & \theta \\ & & & -\theta & 0 \end{pmatrix}.$$

Then equation (4.1) becomes

$$dx(t) = [-Bx(t) + Ag(x(t))]\,dt + \theta\begin{pmatrix} x_2(t) \\ -x_1(t) \\ \vdots \\ x_{2p}(t) \\ -x_{2p-1}(t) \end{pmatrix} dw_1(t). \qquad (5.5)$$

Let $Q$ = the identity matrix again. Note

$$x^TB_1^TQB_1x = \theta^2|x|^2 \quad\text{and}\quad (x^TQB_1x)^2 = 0. \qquad (5.6)$$

Combining (5.6) with (5.4) and then applying Theorem 5.1, we see that the solution of equation (5.5) satisfies

$$\liminf_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \ge \frac{\theta^2}{2} - (\check b + \check\beta\|A\|) \quad\text{a.s.}$$

whenever $x_0 \ne 0$. So the stochastic neural network (5.5) is almost surely exponentially unstable if $\theta^2 > 2(\check b + \check\beta\|A\|)$. Summarizing the above two examples we obtain the following conclusion.

Theorem 5.2 Any neural network of the form $\dot x(t) = -Bx(t) + Ag(x(t))$ can be destabilized by Brownian motion provided the dimension $n \ge 2$ and (1.2) is satisfied.
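The block-rotation construction of Example 5.2 is also easy to verify numerically (our sketch, not from the paper): with $B_1$ built from $2\times2$ blocks $\begin{pmatrix}0&\theta\\-\theta&0\end{pmatrix}$, skew-symmetry makes $x^TB_1x$ vanish identically while $|B_1x|^2 = \theta^2|x|^2$, which is exactly (5.6):

```python
import numpy as np

def rotation_noise_matrix(p, theta):
    """B_1 of Example 5.2: p diagonal copies of the 2x2 block [[0, theta], [-theta, 0]]."""
    block = np.array([[0.0, theta], [-theta, 0.0]])
    return np.kron(np.eye(p), block)

rng = np.random.default_rng(0)
theta, p = 2.0, 3                                    # n = 2p = 6
B1 = rotation_noise_matrix(p, theta)
for _ in range(1000):
    x = rng.normal(size=2 * p)
    assert np.isclose(x @ B1 @ x, 0.0)               # skew-symmetry kills x^T B_1 x
    assert np.isclose(x @ B1.T @ B1 @ x, theta**2 * (x @ x))
print("identities (5.6) verified on all samples")
```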

Naturally, one would ask what happens when the dimension $n = 1$. Although from the practical point of view one-dimensional networks are rare, the question needs to be answered for the completeness of the theory. So let us consider a one-dimensional network

$$\dot u(t) = -bu(t) + ag(u(t)), \qquad (5.7)$$

where $b > 0$ and $a = b$ or $-b$, and $g(u)$ is a sigmoidal real-valued function such that $ug(u) \ge 0$ and $|g(u)| \le \beta|u|$ for all $-\infty < u < \infty$. Assume $\beta < 1$. Then it is easy to verify that the solution $u(t;x_0)$ of equation (5.7) with initial data $u(0) = x_0 \ne 0$ satisfies

$$\limsup_{t\to\infty}\frac{1}{t}\log(|u(t;x_0)|) \le -b\big[1 - \beta(0 \vee \mathrm{sign}(a))\big] < 0.$$

In other words, network (5.7) is exponentially stable. Now perturb this network stochastically and assume the perturbed network is described by

$$dx(t) = [-bx(t) + ag(x(t))]\,dt + \sum_{k=1}^m \theta_kx(t)\,dw_k(t), \qquad (5.8)$$

where the $\theta_k$'s are all real numbers. It is not difficult to show by Theorem 4.1 that the solution $x(t;x_0)$ of equation (5.8) with initial data $x(0) = x_0 \ne 0$ satisfies

$$\limsup_{t\to\infty}\frac{1}{t}\log(|x(t;x_0)|) \le -b\big[1 - \beta(0 \vee \mathrm{sign}(a))\big] - \frac12\sum_{k=1}^m\theta_k^2 < 0 \quad\text{a.s.}$$

So the stochastic neural network (5.8) becomes even more stable. We therefore see that a one-dimensional stable network may not be destabilized by Brownian motions if the stochastic perturbation is restricted to be linear.

6. Open Problems

It has been shown that for any given unstable neural network of the form

$$\dot u(t) = -Bu(t) + Ag(u(t)) \qquad (6.1)$$

satisfying (1.2), one can always choose suitable matrices $B_1, B_2, \dots, B_m$ such that the stochastically perturbed network

$$dx(t) = [-Bx(t) + Ag(x(t))]\,dt + \sum_{k=1}^m B_kx(t)\,dw_k(t) \qquad (6.2)$$

is almost surely exponentially stable; moreover, there are plenty of choices for such $B_k$'s. On the other hand, stabilization is expensive and the cost is generally proportional to $\sum_{k=1}^m \mathrm{trace}(B_kB_k^T)$. In practice, it is important to find the best $B_k$'s, i.e. those which minimize the cost. Let us now describe this problem in a strictly mathematical way. For each $\lambda > 0$ and $m \ge 1$, denote by $S_{\lambda,m}$ the family of matrices $(B_1, B_2, \dots, B_m)$ such that the top Lyapunov exponent of the solution of equation (6.2) is not greater than $-\lambda$. Obviously $S_{\lambda,m}$ is not empty. Define

$$r_{\lambda,m} = \inf_{(B_1,\dots,B_m)\in S_{\lambda,m}} \sum_{k=1}^m \mathrm{trace}(B_kB_k^T).$$

The first open problem is: Is there an optimal $(\bar B_1, \dots, \bar B_m) \in S_{\lambda,m}$ in the sense

$$r_{\lambda,m} = \sum_{k=1}^m \mathrm{trace}(\bar B_k\bar B_k^T)\,?$$

Now let $S_\lambda = \bigcup_{m=1}^\infty S_{\lambda,m}$ and $r_\lambda = \inf\{r_{\lambda,m} : 1 \le m < \infty\}$. The second open problem is: Is there an optimal $(\bar B_1, \dots, \bar B_m) \in S_\lambda$ in the sense

$$r_\lambda = \sum_{k=1}^m \mathrm{trace}(\bar B_k\bar B_k^T)\,?$$

Furthermore, define $r = \inf\{r_\lambda : \lambda > 0\}$. In the case when network (6.1) is exponentially unstable, it is not very difficult to show that $r > 0$. The meaning of $r$ is that if matrices $(B_1, B_2, \dots, B_m)$ for some $m$ are such that $\sum_{k=1}^m \mathrm{trace}(B_kB_k^T) < r$, then the stochastic network (6.2) is definitely not almost surely exponentially stable. Should $\sum_{k=1}^m \mathrm{trace}(B_kB_k^T)$ be called the intensity of the stochastic perturbation, then the intensity must not be less than $r$ in order to stabilize the given network. So we can call $r$ the minimum intensity of stochastic perturbation for stabilization. The question is: What is the value of $r$?

Acknowledgement

The authors would like to thank the Royal Society for the financial support which enabled X. Mao to invite X.X. Liao to visit the University of Strathclyde to carry out this joint research.

REFERENCES

[1] Cohen, M.A. and Grossberg, S., Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. on Systems, Man and Cybernetics 13 (1983), 815-826.
[2] Denker, J.S. (Editor), Neural Networks for Computing (Snowbird, UT, 1986), Proceedings of the Conference on Neural Networks for Computing, AIP, New York, 1986.
[3] Friedman, A., Stochastic Differential Equations and Applications, Vol. 1, Academic Press, 1975.
[4] Hopfield, J.J., Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79 (1982), 2554-2558.
[5] Hopfield, J.J., Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA 81 (1984), 3088-3092.
[6] Hopfield, J.J. and Tank, D.W., Computing with neural circuits: a model, Science 233 (1986), 625-633.
[7] Liao, X.X., Stability of a class of nonlinear continuous neural networks, Proceedings of the First World Conference on Nonlinear Analysis, WC33, 1992.
[8] Liptser, R.Sh. and Shiryayev, A.N., Theory of Martingales, Kluwer Academic Publishers, 1986.

[9] Mao, X., Exponential Stability of Stochastic Differential Equations, Marcel Dekker Inc., 1994.
[10] Métivier, M., Semimartingales, Walter de Gruyter, 1982.
[11] Guez, A., Protopopescu, V. and Barhen, J., On the stability, storage capacity, and design of nonlinear continuous neural networks, IEEE Trans. on Systems, Man and Cybernetics 18 (1988).
