International Journal of Automation and Computing 8(1), February 2011, 128-133
DOI: 10.1007/s11633-010-0564-y

New Stability Criteria for Recurrent Neural Networks with a Time-varying Delay

Hong-Bing Zeng 1,2   Shen-Ping Xiao 1   Bin Liu 1,3
1 School of Electrical and Information Engineering, Hunan University of Technology, Zhuzhou 412008, PRC
2 School of Information Science and Engineering, Central South University, Changsha 410083, PRC
3 School of Engineering, Australian National University, ACT 0200, Australia

Abstract: This paper deals with the stability of static recurrent neural networks (RNNs) with a time-varying delay. An augmented Lyapunov-Krasovskii functional is employed, in which some useful terms are included. Furthermore, the relationship among the time-varying delay, its upper bound, and their difference is taken into account, and novel bounding techniques for 1 − τ̇(t) are employed. As a result, without ignoring any useful term in the derivative of the Lyapunov-Krasovskii functional, the resulting delay-dependent criteria are less conservative than existing ones. Finally, a numerical example is given to demonstrate the effectiveness of the proposed methods.

Keywords: Stability, recurrent neural networks (RNNs), time-varying delay, delay-dependent, augmented Lyapunov-Krasovskii functional.

1 Introduction

During the past decades, recurrent neural networks (RNNs) have found many successful applications in signal processing, pattern classification, realizing associative memories, solving certain optimization problems, etc. Therefore, the study of RNNs has received considerable attention, and various issues of neural networks have been investigated [1-4]. Because integration and communication delays are unavoidably encountered in neural networks, often constituting a source of instability and oscillations, considerable attention has been focused on the stability problem of neural networks with time delays [5-17].
Based on the difference of basic variables (local field states or neuron states), RNNs can be classified as local field networks and static neural networks [18]. The latter have received little attention, and only a few stability results are available for them, while the former, as reviewed above, have been studied thoroughly. A robust exponential stability result was given for the neural network without a delay in [19], where a linear matrix inequality (LMI) approach was employed. Recently, the LMI technique was extended to the static neural network with a constant delay [20] and with a time-varying delay [21], where sufficient conditions guaranteeing the global asymptotic stability of the neural network were obtained. Nevertheless, for the static neural network with a time-varying delay, the results are considerably conservative. By employing a new Lyapunov-Krasovskii functional, less conservative criteria were obtained in [22]. Even so, as pointed out in [23], some useful terms are ignored in the derivative of the Lyapunov-Krasovskii functional, which may lead to considerable conservativeness. Thus, there is still some room for improvement.

On the other hand, many efforts have been made to improve delay-dependent conditions for delayed systems by constructing a variety of Lyapunov-Krasovskii functionals. In [24], some less conservative results for time-delay systems were obtained by introducing an augmented Lyapunov-Krasovskii functional. In addition, by constructing a new Lyapunov-Krasovskii functional based on the idea of segmenting the delay length, new stability criteria for delayed neural networks were derived in [25].

In this paper, the global asymptotic stability of static RNNs with a time-varying delay is investigated.

Manuscript received December 9, 2009; revised June 9, 2010. This work was supported by National Natural Science Foundation of China (No. 60874025) and Natural Science Foundation of Hunan Province of China (No. 10JJ6098).
A novel Lyapunov-Krasovskii functional is constructed, in which more information is included. Moreover, novel bounding techniques for 1 − τ̇(t) are employed. As a result, new delay-dependent stability criteria are derived without ignoring any useful term in the derivative of the Lyapunov-Krasovskii functional. Finally, a numerical example is given to demonstrate the effectiveness and the merits of the proposed methods.

Throughout this paper, N^T and N^{-1} stand for the transpose and the inverse of the matrix N, respectively; R^n denotes the n-dimensional Euclidean space; P > 0 (P ≥ 0) means that the matrix P is symmetric and positive definite (positive semidefinite); diag{·} denotes a block-diagonal matrix; ||z|| is the Euclidean norm of z; and the symbol * within a matrix represents the symmetric terms of the matrix, e.g.,

    [X  Y; *  Z] = [X  Y; Y^T  Z].

Matrices, if not explicitly stated, are assumed to have compatible dimensions.

2 System description

Consider the following recurrent neural network with a time-varying delay:
    ẋ(t) = −Ax(t) + f(Wx(t − τ(t)) + J),  x(t) = φ(t), t ∈ [−τ, 0]   (1)

where x(t) = [x1(t), x2(t), · · · , xn(t)]^T ∈ R^n is the neuron state vector; f(x(·)) = [f1(x1(·)), f2(x2(·)), · · · , fn(xn(·))]^T ∈ R^n denotes the neuron activation functions; J = [j1, j2, · · · , jn]^T ∈ R^n is a constant input vector; A = diag{a1, a2, · · · , an} > 0 and W are known interconnection weight matrices; φ(t) is the initial condition; and τ(t) represents the time-varying delay and satisfies

    0 ≤ τ(t) ≤ τ,  |τ̇(t)| ≤ µ.   (2)

Moreover, the neuron activation functions satisfy the following assumption.

Assumption 1. For i = 1, 2, · · · , n, the neuron activation functions are bounded and satisfy the following condition:

    0 ≤ (fi(α1) − fi(α2)) / (α1 − α2) ≤ li,  ∀ α1, α2 ∈ R, α1 ≠ α2   (3)

where li ≥ 0 for i = 1, 2, · · · , n.

Remark 1. Provided that W is invertible and WA = AW, by y(t) = Wx(t) + J, static neural network (1) can be transformed into the other kind, namely, the local field network

    ẏ(t) = −Ay(t) + Wf(y(t − τ(t))) + AJ.   (4)

However, many static neural networks do not satisfy the transformation condition. That is, systems (1) and (4) are not always equivalent. Therefore, it is necessary to study neural networks of the form (1).

Under Assumption 1, there is an equilibrium x* of (1). For simplicity, make the transformation z(·) = x(·) − x*. Then, (1) can be transformed into

    ż(t) = −Az(t) + g(Wz(t − τ(t))),  z(t) = ψ(t), t ∈ [−τ, 0]   (5)

where z(t) = [z1(t), z2(t), · · · , zn(t)]^T is the state vector of the transformed system (5); ψ(t) = φ(t) − x* is the initial condition; and the transformed neuron activation function is g(z(·)) = [g1(z1(·)), g2(z2(·)), · · · , gn(zn(·))]^T = f(Wz(·) + Wx* + J) − f(Wx* + J). Obviously, the functions gi(·) satisfy

    0 ≤ gi(zi)/zi ≤ li,  gi(0) = 0,  ∀ zi ≠ 0   (6)

for i = 1, 2, · · · , n. Let L = diag{l1, l2, · · · , ln}. It is obvious that neural network (5) admits an equilibrium point z(t) ≡ 0, corresponding to the initial condition ψ(t) ≡ 0.
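Condition (3) is a sector (slope) restriction on the activation functions. As a purely illustrative numerical check (not part of the paper's analysis), the sketch below verifies that a scaled tanh nonlinearity, a common choice satisfying Assumption 1, has all difference quotients confined to [0, li]; the slope value l used here is a hypothetical example.

```python
import numpy as np

# Hypothetical slope bound l; f(x) = tanh(l * x) is a standard activation
# satisfying Assumption 1 with l_i = l.
l = 0.5
f = lambda x: np.tanh(l * x)

rng = np.random.default_rng(0)
a = rng.uniform(-10, 10, 1000)
b = rng.uniform(-10, 10, 1000)
mask = np.abs(a - b) > 1e-9                           # avoid division by ~0
q = (f(a[mask]) - f(b[mask])) / (a[mask] - b[mask])   # difference quotients

# Sector condition (3): 0 <= (f(a) - f(b)) / (a - b) <= l for all a != b
assert np.all(q >= 0) and np.all(q <= l + 1e-12)
print("sector condition holds; quotient range:", q.min(), q.max())
```

Since tanh is increasing with derivative l·sech²(lx) ≤ l, every quotient lies in (0, l], which is exactly what (3) and (6) require.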
Based on the analysis above, the problem of analyzing the stability of system (1) at the equilibrium x* is changed into the problem of analyzing the stability of system (5) at the origin.

3 Main results

In this section, we present some asymptotic stability criteria for the considered neural networks.

Theorem 1. For given scalars τ > 0 and µ ≥ 0, the neural network (5) is globally asymptotically stable if there exist matrices

    P = [P11  P12  P13; P12^T  P22  P23; P13^T  P23^T  P33] > 0,
    Q = [Q11  Q12; Q12^T  Q22] ≥ 0,  S ≥ 0,
    R = [R11  R12; R12^T  R22] > 0,  Z > 0,
    X = [X11  X12; X12^T  X22] ≥ 0,

diagonal matrices D = diag{d1, d2, · · · , dn} ≥ 0 and Yi ≥ 0, i = 1, 2, and any appropriately dimensioned matrices N = [N1^T  N2^T]^T, M = [M1^T  M2^T]^T, and H = [H1^T  H2^T]^T, such that LMIs (7)-(9) hold:

        [ Ω11  Ω12  −M1   Ω14    P12    Ω16  −H1       P13 ]
        [  *   Ω22  −M2   P13^T  P23^T  0    W^T L Y2  Ω28 ]
        [  *    *   −R11  P12^T  Ω35    0    0         P23 ]
    Ω = [  *    *    *    Ω44    0      Ω46  −H2       0   ] < 0   (7)
        [  *    *    *     *     −R22   0    0         0   ]
        [  *    *    *     *     *      Ω66  0         0   ]
        [  *    *    *     *     *      *    Ω77       0   ]
        [  *    *    *     *     *      *    *         Ω88 ]

    Ψ1 = [X  N; *  Z] ≥ 0   (8)

    Ψ2 = [X  M; *  Z] ≥ 0   (9)

where
    Ω11 = Q11 + R11 + H1 A + A^T H1^T + N1 + N1^T + τX11
    Ω12 = −N1 + N2^T + M1 + τX12
    Ω14 = P11 + Q12 + R12 + H1 + A^T H2^T
    Ω16 = W^T L Y1
    Ω22 = −(1 − µ)Q11 − N2 − N2^T + M2 + M2^T + τX22
    Ω28 = P33 − Q12
    Ω35 = P22 − R12
    Ω44 = Q22 + R22 + τZ + H2 + H2^T
    Ω46 = W^T D
    Ω66 = S − 2Y1
    Ω77 = −(1 − µ)S − 2Y2
    Ω88 = −(1/(1 + µ)) Q22.

Proof. Obviously, the following equations are true for any matrices N = [N1^T  N2^T]^T, M = [M1^T  M2^T]^T, and H = [H1^T  H2^T]^T with appropriate dimensions:

    β1 = 2ζ1^T(t) N [ z(t) − z(t − τ(t)) − ∫_{t−τ(t)}^{t} ż(α) dα ] = 0   (10)

    β2 = 2ζ1^T(t) M [ z(t − τ(t)) − z(t − τ) − ∫_{t−τ}^{t−τ(t)} ż(α) dα ] = 0   (11)

    β3 = 2ζ2^T(t) H [ ż(t) + Az(t) − g(Wz(t − τ(t))) ] = 0   (12)

where ζ1(t) = [z^T(t)  z^T(t − τ(t))]^T and ζ2(t) = [z^T(t)  ż^T(t)]^T. On the other hand, for any positive semidefinite matrix X with appropriate dimensions, the following equation holds:

    β4 = τ ζ1^T(t) X ζ1(t) − ∫_{t−τ}^{t} ζ1^T(t) X ζ1(t) dα
       = τ ζ1^T(t) X ζ1(t) − ∫_{t−τ(t)}^{t} ζ1^T(t) X ζ1(t) dα − ∫_{t−τ}^{t−τ(t)} ζ1^T(t) X ζ1(t) dα = 0.   (13)

By Assumption 1, it can be deduced that for any diagonal matrices Yi ≥ 0, i = 1, 2,

    β5 = 2g^T(Wz(t)) Y1 [ LWz(t) − g(Wz(t)) ] ≥ 0   (14)

    β6 = 2g^T(Wz(t − τ(t))) Y2 [ LWz(t − τ(t)) − g(Wz(t − τ(t))) ] ≥ 0.   (15)

Construct the following Lyapunov-Krasovskii functional candidate:

    V(z_t) = ζ3^T(t) P ζ3(t) + 2 Σ_{i=1}^{n} d_i ∫_0^{W_i z(t)} g_i(α) dα
           + ∫_{t−τ(t)}^{t} ζ2^T(α) Q ζ2(α) dα + ∫_{t−τ(t)}^{t} g^T(Wz(α)) S g(Wz(α)) dα
           + ∫_{t−τ}^{t} ζ2^T(α) R ζ2(α) dα + ∫_{−τ}^{0} ∫_{t+θ}^{t} ż^T(α) Z ż(α) dα dθ   (16)

where ζ3(t) = [z^T(t)  z^T(t − τ)  z^T(t − τ(t))]^T, W_i denotes the i-th row of the matrix W, and P = [P11  P12  P13; P12^T  P22  P23; P13^T  P23^T  P33] > 0, Q = [Q11  Q12; Q12^T  Q22] ≥ 0, S ≥ 0, R = [R11  R12; R12^T  R22] > 0, and Z > 0 are to be determined.

Now, calculating the derivative of V(z_t) along the solutions of neural network (5), we have

    V̇(z_t) = 2ζ3^T(t) P ζ̇3(t) + 2g^T(Wz(t)) D W ż(t)
           + g^T(Wz(t)) S g(Wz(t)) + ζ2^T(t)(Q + R) ζ2(t)
           − (1 − τ̇(t)) ζ2^T(t − τ(t)) Q ζ2(t − τ(t))
           − (1 − τ̇(t)) g^T(Wz(t − τ(t))) S g(Wz(t − τ(t)))
           − ζ2^T(t − τ) R ζ2(t − τ) + τ ż^T(t) Z ż(t) − ∫_{t−τ}^{t} ż^T(α) Z ż(α) dα.   (17)

Adding the left-hand sides of (10)-(15) to (17) yields

    V̇(z_t) ≤ 2ζ3^T(t) P [ż^T(t)  ż^T(t − τ)  η3^T(t)]^T + 2g^T(Wz(t)) D W ż(t)
           + g^T(Wz(t)) S g(Wz(t))
           + [z^T(t)  ż^T(t)] [R11 + Q11  R12 + Q12; *  R22 + Q22] [z^T(t)  ż^T(t)]^T
           − (1 − µ) z^T(t − τ(t)) Q11 z(t − τ(t)) − 2z^T(t − τ(t)) Q12 η3(t)
           − (1/(1 + µ)) η3^T(t) Q22 η3(t)
           − (1 − µ) g^T(Wz(t − τ(t))) S g(Wz(t − τ(t)))
           − ζ2^T(t − τ) R ζ2(t − τ)
           + τ ż^T(t) Z ż(t) − ∫_{t−τ}^{t} ż^T(α) Z ż(α) dα
           + β1 + β2 + β3 + β4 + β5 + β6
         = ξ1^T(t) Ω ξ1(t) − ∫_{t−τ(t)}^{t} ξ2^T(t, α) Ψ1 ξ2(t, α) dα
           − ∫_{t−τ}^{t−τ(t)} ξ2^T(t, α) Ψ2 ξ2(t, α) dα   (18)

where

    η1(t) = [z^T(t)  z^T(t − τ(t))  z^T(t − τ)  ż^T(t)]^T
    η2(t) = [ż^T(t − τ)  g^T(Wz(t))  g^T(Wz(t − τ(t)))]^T
    η3(t) = (1 − τ̇(t)) ż(t − τ(t))
    ξ1(t) = [η1^T(t)  η2^T(t)  η3^T(t)]^T
    ξ2(t, α) = [z^T(t)  z^T(t − τ(t))  ż^T(α)]^T.
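Conditions (8) and (9) require the block matrices Ψ1 and Ψ2 to be positive semidefinite. For matrices of this block form, semidefiniteness can be checked numerically via eigenvalues or, when Z > 0, via the Schur complement X − N Z^{-1} N^T ≥ 0. The sketch below uses small randomly generated matrices as stand-ins for X, N, and Z (purely illustrative, not data from the paper) and shows that the two tests agree.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_psd(M, tol=1e-9):
    # Positive-semidefiniteness test via the symmetric eigenvalue solver.
    return np.min(np.linalg.eigvalsh((M + M.T) / 2)) >= -tol

n = 3
# Illustrative stand-ins: Z > 0, and X chosen so that the Schur complement
# of Z in Psi equals 0.5*I, which guarantees Psi >= 0.
Z = 2.0 * np.eye(n)
N = 0.1 * rng.standard_normal((2 * n, n))
X = N @ np.linalg.inv(Z) @ N.T + 0.5 * np.eye(2 * n)

Psi = np.block([[X, N], [N.T, Z]])        # Psi = [X N; N^T Z], as in (8)
schur = X - N @ np.linalg.inv(Z) @ N.T    # Schur complement of Z in Psi

# Z > 0 together with a PSD Schur complement is equivalent to Psi >= 0.
assert is_psd(Z) and is_psd(schur) and is_psd(Psi)
print("Psi is positive semidefinite:", is_psd(Psi))
```

In practice, LMIs such as (7)-(9) are solved with a semidefinite programming toolbox rather than checked pointwise; the eigenvalue test above only illustrates what the constraints mean for fixed matrices.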
If Ω < 0, Ψ1 ≥ 0, and Ψ2 ≥ 0, then V̇(z_t) < 0 for any ξ1(t) ≠ 0 and ξ2(t, α) ≠ 0, and the global asymptotic stability of neural network (5) is achieved. □

Remark 2. The Lyapunov-Krasovskii functional (16) in this paper is different from those in [21, 22]. In (16), z(t − τ(t)), z(t − τ), and ż(t) have been taken into account, which may reduce the conservativeness of the derived result.

Remark 3. In the definition of ξ1(t) in (18), it is (1 − τ̇(t))ż^T(t − τ(t)), rather than ż^T(t − τ(t)), that is introduced. This definition makes (1 − τ̇(t)) appear only in Ω22 and Ω88. Using the bounding information on τ̇(t), these two terms can be easily bounded.

Remark 4. In [21], the delay term τ(t) with 0 ≤ τ(t) ≤ τ was enlarged to τ, and the term τ − τ(t) was also enlarged to τ; i.e., τ = τ(t) + (τ − τ(t)) was enlarged to 2τ. In contrast, the relationship among τ(t), τ − τ(t), and τ is taken into account in the proof of Theorem 1. Therefore, the stability condition given in Theorem 1 is expected to be less conservative.

Remark 5. By setting Q22 = 0, or Q11 = 0 and S = 0, respectively, Theorem 1 can be used in the two cases of τ̇(t) in which the lower bound is unknown and the upper bound is known, and in which the upper bound is unknown and the lower bound is known. Finally, in the case of µ being unknown or τ(t) being non-differentiable, by setting P13 = 0, P23 = 0, P33 = 0, Q = 0, and S = 0 in the Lyapunov-Krasovskii functional (16), and following lines similar to the proof of Theorem 1, the following corollary can be obtained.

Corollary 1.
For a given scalar τ > 0, the neural network (5) is globally asymptotically stable if there exist symmetric matrices P = [P11  P12; P12^T  P22] > 0, R = [R11  R12; R12^T  R22] > 0, Z > 0, and X = [X11  X12; X12^T  X22] ≥ 0, diagonal matrices D = diag{d1, d2, · · · , dn} ≥ 0 and Yi ≥ 0, i = 1, 2, and any appropriately dimensioned matrices N = [N1^T  N2^T]^T, M = [M1^T  M2^T]^T, and H = [H1^T  H2^T]^T, such that LMIs (8), (9), and (19) hold:

    [ Ω11  Ω12  −M1   Ω14    P12   Ω16  −H1      ]
    [  *   Ω22  −M2   0      0     0    W^T L Y2 ]
    [  *    *   −R11  P12^T  Ω35   0    0        ]
    [  *    *    *    Ω44    0     Ω46  −H2      ] < 0   (19)
    [  *    *    *     *     −R22  0    0        ]
    [  *    *    *     *     *     Ω66  0        ]
    [  *    *    *     *     *     *    Ω77      ]

where
    Ω11 = R11 + H1 A + A^T H1^T + N1 + N1^T + τX11
    Ω14 = P11 + R12 + H1 + A^T H2^T
    Ω22 = −N2 − N2^T + M2 + M2^T + τX22
    Ω44 = R22 + τZ + H2 + H2^T
    Ω66 = −2Y1
    Ω77 = −2Y2

and Ω12, Ω16, Ω35, and Ω46 are defined in Theorem 1.

4 Numerical examples

This section provides a numerical example that demonstrates the effectiveness of the criteria presented in this paper. Consider a delayed neural network (5) with the following parameters:

    A = [7.3458  0  0; 0  6.9987  0; 0  0  5.5959]

    W = [13.6014  2.9616  0.6936; 7.4736  21.6810  3.2100; 0.7920  2.6334  20.1300].

The activation functions are taken as follows:

    g1(x) = tanh(0.3680x),  g2(x) = tanh(0.1795x),  g3(x) = tanh(0.2876x).

It is clear that the activation functions satisfy (6) with L = diag{0.3680, 0.1795, 0.2876}. For various µ, the computed upper bounds of τ that guarantee the global asymptotic stability of the neural network are listed in Table 1. It can be seen that the proposed results are less conservative than existing ones.

Table 1  Allowable upper bounds of τ for different µ

    Method         µ = 0.5   µ = 0.9   µ = 1.0   Any µ
    Shao [21]      0.3733    0.2343    0.2313    0.2313
    Wu [22]        0.4265    0.3217    0.3211    0.3211
    Theorem 1      0.4602    0.3686    0.3661    -
    Corollary 1    -         -         -         0.3218

Assuming that the initial state is z(t) = [1, −0.5, 1]^T and τ(t) = 0.3661 sin²(t), the simulation result given in Fig. 1 further verifies the effectiveness of the proposed method.
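The delay used in the simulation, τ(t) = 0.3661 sin²(t), can be checked against condition (2): its values stay in [0, 0.3661], and since τ̇(t) = 0.3661 sin(2t), its derivative is bounded in magnitude by 0.3661 as well. A brief numerical confirmation (illustrative only):

```python
import numpy as np

tau_bar = 0.3661
t = np.linspace(0, 20, 200001)
tau = tau_bar * np.sin(t) ** 2       # tau(t) = 0.3661 * sin^2(t)
tau_dot = tau_bar * np.sin(2 * t)    # d/dt sin^2(t) = sin(2t)

# Condition (2): 0 <= tau(t) <= tau_bar and |tau_dot(t)| <= mu.
assert tau.min() >= 0 and tau.max() <= tau_bar
assert np.max(np.abs(tau_dot)) <= tau_bar + 1e-12  # so any mu >= 0.3661 works
print("max tau:", tau.max(), "max |tau_dot|:", np.max(np.abs(tau_dot)))
```

This confirms the simulated delay is admissible for the τ = 0.3661, µ = 1.0 entry of Table 1.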
5 Conclusions

This paper has studied the stability problem of static RNNs with a time-varying delay. By employing an augmented Lyapunov-Krasovskii functional and novel bounding techniques for 1 − τ̇(t), some less conservative delay-dependent criteria have been derived. A numerical example has been given to demonstrate the effectiveness and the merits of the proposed methods.
Fig. 1  State trajectory of the RNN in the example

References

[1] P. J. Angeline, G. M. Saunders, J. B. Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, vol. 5, no. 1, pp. 54-65, 1994.
[2] C. C. Ku, K. Y. Lee. Diagonal recurrent neural networks for dynamic systems control. IEEE Transactions on Neural Networks, vol. 6, no. 1, pp. 144-156, 1995.
[3] A. N. Michel, D. Liu. Qualitative Analysis and Synthesis of Recurrent Neural Networks, New York, USA: Marcel Dekker, 2002.
[4] Y. R. Liu, Z. D. Wang, X. H. Liu. Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Networks, vol. 19, no. 5, pp. 667-675, 2006.
[5] T. Li, Q. Luo, C. Y. Sun, B. Y. Zhang. Exponential stability of recurrent neural networks with time-varying discrete and distributed delays. Nonlinear Analysis: Real World Applications, vol. 10, no. 4, pp. 2581-2589, 2009.
[6] J. D. Cao, J. Wang. Global asymptotic and robust stability of recurrent neural networks with time delays. IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 52, no. 2, pp. 417-426, 2005.
[7] C. G. Li, X. F. Liao. Robust stability and robust periodicity of delayed recurrent neural networks with noise disturbance. IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 53, no. 10, pp. 2265-2273, 2006.
[8] X. F. Liao, K. W. Wong. Robust stability of interval bidirectional associative memory neural network with time delays. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1142-1154, 2004.
[9] Y. R. Liu, Z. D. Wang, A. Serrano, X. H. Liu. Discrete-time recurrent neural networks with time-varying delays: Exponential stability analysis. Physics Letters A, vol. 362, no. 5-6, pp. 480-488, 2007.
[10] Z. G. Zeng, J. Wang. Improved conditions for global exponential stability of recurrent neural networks with time-varying delays. IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 623-635, 2006.
[11] Z. G. Zeng, J. Wang, X. X. Liao. Global exponential stability of a general class of recurrent neural networks with time-varying delays. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 50, no. 10, pp. 1353-1358, 2003.
[12] M. Wu, F. Liu, P. Shi, Y. He, R. Yokoyama. Improved free-weighting matrix approach for stability analysis of discrete-time recurrent neural networks with time-varying delay. IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 55, no. 7, pp. 690-694, 2008.
[13] J. Sun, G. P. Liu, J. Chen, D. Rees. Improved stability criteria for neural networks with time-varying delay. Physics Letters A, vol. 373, no. 3, pp. 342-348, 2009.
[14] O. M. Kwon, J. H. Park. Improved delay-dependent stability criterion for neural networks with time-varying delays. Physics Letters A, vol. 373, no. 5, pp. 529-535, 2009.
[15] Y. Y. Wu, T. Li, Y. Q. Wu. Improved exponential stability criteria for recurrent neural networks with time-varying discrete and distributed delays. International Journal of Automation and Computing, vol. 7, no. 2, pp. 199-204, 2010.
[16] G. D. Zong, J. Liu. New delay-dependent global asymptotic stability condition for Hopfield neural networks with time-varying delays. International Journal of Automation and Computing, vol. 6, no. 4, pp. 415-419, 2009.
[17] Y. He, G. P. Liu, D. Rees. New delay-dependent stability criteria for neural networks with time-varying delay. IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 310-314, 2007.
[18] Z. B. Xu, H. Qiao, J. G. Peng, B. Zhang. A comparative study on two modeling approaches in neural networks. Neural Networks, vol. 17, no. 1, pp. 73-85, 2004.
[19] S. Y. Xu, J. Lam, D. W. C. Ho, Y. Zou. Global robust exponential stability analysis for interval recurrent neural networks. Physics Letters A, vol. 325, no. 2, pp. 124-133, 2004.
[20] J. L. Liang, J. D. Cao. A based-on LMI stability criterion for delayed recurrent neural networks. Chaos, Solitons and Fractals, vol. 28, no. 1, pp. 154-160, 2006.
[21] H. Y. Shao. Delay-dependent stability for recurrent neural networks with time-varying delays. IEEE Transactions on Neural Networks, vol. 19, no. 9, pp. 1647-1651, 2008.
[22] Y. Y. Wu, Y. Q. Wu. Stability analysis for recurrent neural networks with time-varying delay. International Journal of Automation and Computing, vol. 6, no. 3, pp. 223-227, 2009.
[23] Y. He, G. P. Liu, D. Rees, M. Wu. Stability analysis for neural networks with time-varying interval delay. IEEE Transactions on Neural Networks, vol. 18, no. 6, pp. 1850-1854, 2007.
[24] Y. He, Q. G. Wang, C. Lin, M. Wu. Augmented Lyapunov functional and delay-dependent stability criteria for neutral systems. International Journal of Robust and Nonlinear Control, vol. 15, no. 18, pp. 923-933, 2005.
[25] X. M. Zhang, Q. L. Han. New Lyapunov-Krasovskii functionals for global asymptotic stability criteria of delayed neural networks. IEEE Transactions on Neural Networks, vol. 20, no. 3, pp. 533-539, 2009.

Hong-Bing Zeng received the B. Sc. degree in electrical engineering from Tianjin University of Technology and Education, Tianjin, PRC in 2003, and the M. Sc. degree in computer science from Central South University of Forestry, Changsha, PRC in 2006. He is currently a Ph. D. candidate in control science and engineering at Central South University. Since July 2003, he has been with the School of Electrical and Information Engineering, Hunan University of Technology, Zhuzhou, PRC, where he became a lecturer in 2008. His research interests include time-delay systems and networked control systems.
E-mail: 9804zhb@163.com (Corresponding author)

Shen-Ping Xiao received the B. Sc. degree in engineering from Northeastern University, Shenyang, PRC in 1988, and the Ph. D. degree in control science and engineering from Central South University, Changsha, PRC in 2008.
Currently, he is a professor in the School of Electrical and Information Engineering, Hunan University of Technology, Zhuzhou, PRC. His research interests include robust control and its applications, intelligent control, and process control.
E-mail: xsph 519@163.com

Bin Liu received the M. Sc. degree from the Department of Mathematics, East China Normal University, Shanghai, PRC in 1993, and the Ph. D. degree from the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, PRC in June 2003. He was a postdoctoral fellow at Huazhong University of Science and Technology from July 2003 to July 2005, a postdoctoral fellow at the University of Alberta, Edmonton, AB, Canada from August 2005 to October 2006, and a visiting research fellow at the Hong Kong Polytechnic University, Hong Kong, PRC in 2004. Since July 1993, he has been with the School of Information and Computation Science and the School of Electrical and Information Engineering, Hunan University of Technology, Zhuzhou, PRC, where he became an associate professor in 2001 and a professor in 2004. He is now a research fellow in the School of Engineering, Australian National University, ACT, Australia. His research interests include stability analysis and applications of nonlinear and hybrid systems, optimal control and stability, chaos and network synchronization and control, and Lie algebra.
E-mail: oliverliu78@163.com