EXPONENTIAL SYNCHRONIZATION OF DELAYED REACTION-DIFFUSION NEURAL NETWORKS WITH GENERAL BOUNDARY CONDITIONS


ROCKY MOUNTAIN JOURNAL OF MATHEMATICS, Volume 43, Number 3, 2013

PING YAN AND TENG LV

ABSTRACT. In this paper, we study delayed reaction-diffusion neural networks with general boundary conditions, including Dirichlet and Neumann boundary conditions. By using some inequality techniques and constructing suitable Lyapunov functionals, some sufficient conditions are given to ensure the exponential synchronization of the drive-response neural networks. Finally, an example is given to verify the theoretical analysis.

Keywords and phrases. Exponential synchronization, delayed neural networks, general boundary condition, Lyapunov functional.

1. Introduction. In recent years, different types of neural networks with or without delays [2, 4-6, 10, 14, 15, 16, 18-20, 23, 26-30] have been widely investigated due to their applicability in solving problems in image processing, signal processing and pattern recognition. Since chaos synchronization was introduced by Pecora and Carroll in the 1990s [12, 13] by proposing the drive-response concept, its potential applications have been found in many different areas, including secure communication, chaos generators, chemical reactions, biological systems, information science, etc. As artificial neural networks can exhibit complicated dynamics and even chaotic behaviors, the synchronization of coupled chaotic systems and chaotic neural networks has received considerable attention [1, 7-9, 11, 17, 21, 22, 24, 25]. Because of the finite processing speed of information, it is sometimes necessary to take account of time delays in the modeling of biological or artificial neural networks. Time delays may lead to bifurcation, oscillation, divergence or instability, which may be harmful to a system; thus, it is extremely important to study the synchronization of delayed
This research was supported by the Science Research Plan Project of Anhui Agricultural University (No. YJ21-12 and No. JF212-27) and the National Natural Science Foundation of China (No. 11212). The second author is the corresponding author. Received by the editors on May 1, 2010, and in revised form on September 2, 2010. DOI:10.1216/RMJ-2013-43-3-1037. Copyright 2013 Rocky Mountain Mathematics Consortium.

neural networks [1, 7-9, 11, 17, 21, 22, 24, 25]. Although models of delayed feedback with discrete delays are a good approximation in simple circuits, neural networks usually have a spatial nature due to the presence of a number of parallel pathways with a variety of axon sizes and lengths. A distribution of conduction velocities along these pathways leads to a distribution of propagation delays. Therefore, models with time-varying delays and continuously distributed delays are more appropriate for the synchronization of neural networks [9, 17, 22, 24]. Just as diffusion effects cannot be avoided in the study of biological and artificial neural networks, they must also be considered in the study of synchronization of chaotic neural networks, because electrons move in asymmetric electromagnetic fields and the activations vary in space as well as in time. References [9, 17, 22, 24] have considered the synchronization of neural networks with diffusion terms, which are expressed by partial differential equations. In these papers, the boundary conditions are all assumed to be Dirichlet boundary conditions, with the unknown functions being zero, or to be such that the derivatives of the unknown functions with respect to the spatial variables are zero, which are special cases of Neumann boundary conditions. To the best of our knowledge, exponential synchronization has not been considered for delayed reaction-diffusion neural networks with general boundary conditions.

In this paper, by using some inequality techniques and the Lyapunov functional method, exponential synchronization is investigated for delayed reaction-diffusion neural networks with general boundary conditions. Some sufficient conditions are presented, which rely only upon the coefficients of the drive networks and a suitably designed controller gain matrix in the response networks. These conditions are easy to verify because they are expressed as algebraic inequalities. These results generalize the corresponding ones in [9, 17, 22, 24].

This paper is organized as follows. In Section 2, the model description and main results are presented. In Section 3, by constructing a suitable Lyapunov functional, some sufficient conditions are obtained to ensure exponential synchronization of the drive and response neural networks. In Section 4, an example is given to verify the theoretical analysis. Section 5 contains the conclusions of the paper.

2. Model description and main results. Consider the following delayed reaction-diffusion neural networks with general boundary conditions:

(1)
$$\frac{\partial u_i(x,t)}{\partial t} = \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(D_i\,\frac{\partial u_i(x,t)}{\partial x_k}\Big) - a_i u_i(x,t) + \sum_{j=1}^{n} c_{ij} f_j(u_j(x,t)) + \sum_{j=1}^{n} \omega_{ij} f_j(u_j(x,t-t_{ij}(t))) + \sum_{j=1}^{n} b_{ij}\int_{-\infty}^{t}\kappa_{ij}(t-s)\, f_j(u_j(x,s))\,ds + I_i,$$
$$u_i(x,t) = \varphi_i(x,t),\quad -\infty < t \le 0,\ x\in\Omega,$$
$$\sigma_{i1}\frac{\partial u_i(x,t)}{\partial n} + \sigma_{i2}\,u_i(x,t) = 0\quad (x\in\partial\Omega,\ \sigma_{i1},\sigma_{i2}\ge 0\ \text{and}\ \sigma_{i1}^2+\sigma_{i2}^2\ne 0),$$

where i = 1,...,n, and n is the number of neurons in the networks. u(x,t) = (u_1(x,t),...,u_n(x,t))^T, where u_i(x,t) denotes the state of the ith neural unit at time t and in space x. Omega is a bounded open domain in R^m with smooth boundary, and mes Omega > 0 denotes the measure of Omega. The smooth function D_i = D_i(x,t,u) >= 0 corresponds to the transmission diffusion coefficient along the ith unit. a_i > 0 is a constant and represents the rate with which the ith neuron resets its potential to the resting state in isolation when disconnected from the networks and external inputs. c_ij, omega_ij and b_ij are constants and represent the weights of neuron interconnection without delays and with delays, respectively. t_ij(t) corresponds to the transmission delay along the axon of the jth neuron from the ith neuron at time t and satisfies 0 <= t_ij(t) <= tau and t'_ij(t) <= rho < 1, where tau > 0 and rho < 1 are constants. f_i shows how the ith neuron reacts to the input. kappa_ij (i, j = 1,...,n) are delay kernels. I = (I_1,...,I_n)^T, where I_i is the input from outside the system. phi(x,t) = (phi_1(x,t),...,phi_n(x,t))^T, where, for any given i = 1,...,n, phi_i(x,t) is a given
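The drive network (1) can be explored numerically with a standard method-of-lines discretization. The sketch below is not taken from the paper: the one-dimensional domain, all parameter values, the single constant delay, the omission of the distributed-delay term (b_ij = 0) and the Neumann boundary condition are illustrative assumptions.

```python
import numpy as np

# Minimal finite-difference sketch of the drive network (1) on Omega = (0, 1)
# with m = 1, n = 2 neurons, a constant delay t_ij(t) = tau, no distributed
# delay (b_ij = 0), and zero-flux (Neumann) boundaries, i.e. sigma_i1 = 1,
# sigma_i2 = 0. All numbers are placeholders, not the paper's example values.

def simulate(T=1.0, nx=50, dt=1e-3, tau=0.1):
    n = 2
    D = np.array([0.1, 0.1])                   # diffusion coefficients D_i
    a = np.array([1.0, 1.2])                   # self-decay rates a_i
    C = np.array([[0.2, -0.1], [0.1, 0.3]])    # instantaneous weights c_ij
    W = np.array([[0.1, 0.05], [-0.05, 0.1]])  # delayed weights omega_ij
    I = np.array([0.5, -0.5])                  # external inputs I_i
    f = np.tanh                                # activation f_j

    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    lag = int(round(tau / dt))                 # delay expressed in time steps

    # history buffer: initial condition phi_i(x, t) = cos(pi x) for t <= 0
    hist = [np.tile(np.cos(np.pi * x), (n, 1)) for _ in range(lag + 1)]

    for _ in range(int(T / dt)):
        u, u_del = hist[-1], hist[0]           # current and delayed states
        lap = np.empty_like(u)
        lap[:, 1:-1] = (u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]) / dx**2
        lap[:, 0] = lap[:, 1]                  # crude zero-flux boundaries
        lap[:, -1] = lap[:, -2]
        rhs = (D[:, None] * lap - a[:, None] * u
               + C @ f(u) + W @ f(u_del) + I[:, None])
        hist.append(u + dt * rhs)              # explicit Euler step
        hist.pop(0)
    return x, hist[-1]

x, u = simulate()
print(u.shape)  # (2, 50): final state of both neurons on the spatial grid
```

The explicit Euler step is only stable here because D*dt/dx^2 stays below 1/2; a stiffer choice of D or grid would call for an implicit scheme.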

smooth function defined on Omega x (-infinity, 0] with the norm

$$\|\varphi\|_2 = \Big(\sum_{i=1}^{n}\int_\Omega |\varphi_i(x,\cdot)|^2\,dx\Big)^{1/2},\qquad |\varphi_i(x,\cdot)| = \sup_{-\infty<s\le 0}|\varphi_i(x,s)|.$$

Chaotic dynamics is extremely sensitive to initial conditions: even infinitesimal changes in the initial conditions lead to an asymptotic divergence of orbits. In order to observe the synchronization behavior in this class of neural networks, we consider two neural networks in which the drive system, with state variable denoted by u_i(x,t), drives the response system, which has identical dynamical equations with state variable denoted by u~_i(x,t). However, the initial condition of the drive system is different from that of the response system. The response neural networks can therefore be described by the following model:

(2)
$$\frac{\partial \tilde u_i(x,t)}{\partial t} = \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(D_i\,\frac{\partial \tilde u_i(x,t)}{\partial x_k}\Big) - a_i \tilde u_i(x,t) + \sum_{j=1}^{n} c_{ij} f_j(\tilde u_j(x,t)) + \sum_{j=1}^{n} \omega_{ij} f_j(\tilde u_j(x,t-t_{ij}(t))) + \sum_{j=1}^{n} b_{ij}\int_{-\infty}^{t}\kappa_{ij}(t-s)\, f_j(\tilde u_j(x,s))\,ds + I_i + v_i(x,t),$$
$$\tilde u_i(x,t) = \psi_i(x,t),\quad -\infty < t \le 0,\ x\in\Omega,$$
$$\sigma_{i1}\frac{\partial \tilde u_i(x,t)}{\partial n} + \sigma_{i2}\,\tilde u_i(x,t) = 0\quad (x\in\partial\Omega,\ \sigma_{i1},\sigma_{i2}\ge 0\ \text{and}\ \sigma_{i1}^2+\sigma_{i2}^2\ne 0),$$

where i = 1,...,n. v = (v_1,...,v_n)^T is the control input vector, and v_i stands for the external control input that will be appropriately designed for a certain control objective. In this paper, the control input

vector v = (v_1,...,v_n)^T is assumed to take the following form:

(3)
$$(v_1,\dots,v_n)^T = M\,\big(u_1(x,t)-\tilde u_1(x,t),\dots,u_n(x,t)-\tilde u_n(x,t)\big)^T,$$

where M = (M_ij)_{n x n} is the controller gain matrix, which will be appropriately chosen to achieve synchronization of the drive and response systems. psi = (psi_1,...,psi_n)^T, where the psi_i(x,s) (i = 1,...,n) are bounded smooth functions defined on Omega x (-infinity, 0] with the norm

$$\|\psi\|_2 = \Big(\sum_{i=1}^{n}\int_\Omega |\psi_i(x,\cdot)|^2\,dx\Big)^{1/2},\qquad |\psi_i(x,\cdot)| = \sup_{-\infty<s\le 0}|\psi_i(x,s)|.$$

In order to deal with the case in which sigma_i1 = 0, sigma_i2 != 0 (i = 1,...,n), we introduce the following lemma [3].

Lemma 1. Let Omega be a bounded open domain in R^m with smooth boundary. If u = u(x), defined on the closure of Omega, is a smooth function with u = 0 on the boundary, then the following inequality holds:

(4)
$$\int_\Omega u^2(x)\,dx \le \Big(\frac{|\Omega|}{\omega_m}\Big)^{2/m}\int_\Omega \sum_{k=1}^{m}\Big(\frac{\partial u}{\partial x_k}\Big)^2\,dx,$$

where |Omega| denotes the volume of Omega and omega_m denotes the surface area of the unit ball in R^m.

Throughout the paper, we always assume that system (1) has a smooth solution u(x,t) with the norm

$$\|u(\cdot,t)\|_2 = \Big(\sum_{i=1}^{n}\int_\Omega |u_i(x,t)|^2\,dx\Big)^{1/2}\quad \text{for all } t\in[0,\infty).$$
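The Poincare-type inequality (4) can be checked numerically in the one-dimensional case. The exact constant in (4) is reconstructed from a garbled source, so the sketch below only probes the plausibility of that reconstruction, using the test function u(x) = sin(pi x) on Omega = (0, 1), which vanishes on the boundary, with omega_1 = 2 (the "surface area" of the unit ball in R).

```python
import numpy as np

# Numerical sanity check of inequality (4) for m = 1, Omega = (0, 1),
# omega_1 = 2, with u(x) = sin(pi x). Both integrals are computed with the
# composite trapezoidal rule on a fine grid.

def trapezoid(y, dx):
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]
u = np.sin(np.pi * x)
du = np.pi * np.cos(np.pi * x)

lhs = trapezoid(u**2, dx)                      # int u^2 dx -> 1/2
rhs = (1.0 / 2.0) ** 2 * trapezoid(du**2, dx)  # (|Omega|/omega_1)^2 int u'^2 dx
print(lhs <= rhs)  # True: 0.5 <= (1/4) * pi^2/2, a comfortable margin
```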

Definition 1. The drive system (1) and the response system (2) are said to be exponentially synchronized if there exist positive constants gamma > 0 and epsilon > 0 such that

$$\|u(\cdot,t)-\tilde u(\cdot,t)\|_2 \le \gamma\,\|\varphi-\psi\|_2\,\exp(-\varepsilon t)\quad \text{for all } t\ge 0,$$

where the constant epsilon is said to be the exponential synchronization rate.

Definition 2. Let V : R -> R be a continuous function. The upper right-hand Dini derivative D^+V is defined by

$$D^+V(t) = \limsup_{h\to 0^+}\frac{V(t+h)-V(t)}{h}.$$

In this paper, we always assume that:

(H1) There exist positive constants B_i (i = 1,...,n) such that D_i(x,t,u) >= B_i for all x in Omega, all t in [0, infinity), all u in R^n, i = 1,...,n.

(H2) There exist positive constants beta_i (i = 1,...,n) such that |f_i(x_i) - f_i(y_i)| <= beta_i |x_i - y_i| for all x_i, y_i in R, i = 1,...,n.

(H3) The delay kernels kappa_ij (i, j = 1,...,n) satisfy the following assumptions:

(5) kappa_ij : [0, infinity) -> [0, infinity) is a bounded and continuous function;

(6) $\int_0^\infty \kappa_{ij}(s)\,ds = 1$;

(7) there exists a constant sigma > 0 such that $\int_0^\infty \kappa_{ij}(s)\,e^{2\sigma s}\,ds < \infty$.
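For the exponential kernel kappa(s) = e^{-s} used in the example of Section 4, conditions (5)-(7) can be verified by hand: kappa is bounded and continuous, the integral of e^{-s} over [0, infinity) is 1, and the integral of e^{-s} e^{2 sigma s} equals 1/(1 - 2 sigma) < infinity for any 0 < sigma < 1/2. A small numerical confirmation (the cutoff at s = 50 and the grid size are arbitrary choices):

```python
import numpy as np

# Check (6) and (7) for kappa(s) = exp(-s) with sigma = 0.25: the integrals
# are truncated at s = 50, where the integrands are negligible.
s = np.linspace(0.0, 50.0, 500_001)
ds = s[1] - s[0]
kappa = np.exp(-s)

total = np.sum(kappa) * ds                       # condition (6): ~ 1
weighted = np.sum(kappa * np.exp(0.5 * s)) * ds  # condition (7): ~ 1/(1-0.5) = 2
print(round(total, 3), round(weighted, 3))       # -> 1.0 2.0
```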

(H4) For any given i = 1,...,n, there exist constants delta > 0, p_ij, p*_ij, r_ij, r*_ij (j = 1,...,n) such that

$$-2a_i - 2B_i\delta + \sum_{j=1}^{n}\Big[(\beta_j|c_{ij}|)^{2p_{ij}} + (\beta_i|c_{ji}|)^{2-2p_{ji}} + (\beta_j|\omega_{ij}|)^{2p^*_{ij}} + \frac{1}{1-\rho}(\beta_i|\omega_{ji}|)^{2-2p^*_{ji}} + (\beta_j|b_{ij}|)^{2r_{ij}} + (\beta_i|b_{ji}|)^{2-2r_{ji}} + |M_{ij}|^{2r^*_{ij}} + |M_{ji}|^{2-2r^*_{ji}}\Big] < 0,$$

where rho < 1 is the bound on the derivatives of the time-varying delays.

Remark 1. In (H4), the positive constant delta depends upon the volume of the domain Omega and the surface area of the unit ball in R^m.

Remark 2. (H4) involves the constants B_i, defined according to the diffusion coefficients D_i, which implies that the diffusion coefficients affect the exponential synchronization of reaction-diffusion neural networks.

(H5) For any given i = 1,...,n, there exist constants p_ij, p*_ij, r_ij, r*_ij (j = 1,...,n) such that

$$-2a_i + \sum_{j=1}^{n}\Big[(\beta_j|c_{ij}|)^{2p_{ij}} + (\beta_i|c_{ji}|)^{2-2p_{ji}} + (\beta_j|\omega_{ij}|)^{2p^*_{ij}} + \frac{1}{1-\rho}(\beta_i|\omega_{ji}|)^{2-2p^*_{ji}} + (\beta_j|b_{ij}|)^{2r_{ij}} + (\beta_i|b_{ji}|)^{2-2r_{ji}} + |M_{ij}|^{2r^*_{ij}} + |M_{ji}|^{2-2r^*_{ji}}\Big] < 0.$$

Remark 3. Assumption (H5) is stronger than assumption (H4); that is, if (H5) holds, then (H4) must hold.

The main result is the following theorem.

Theorem 1. (1) If sigma_i1 = 0 (i = 1,...,n) and (H1)-(H4) hold, then the drive-response neural networks (1) and (2) are exponentially synchronized.

(2) If sigma_i1 != 0 (i = 1,...,n) and (H2), (H3) and (H5) hold, then the drive-response neural networks (1) and (2) are exponentially synchronized.

Remark 4. Let p_ij = p*_ij = r_ij = r*_ij = 1/2. Then (H4) reduces to the following inequality:

(H4)_1
$$-2a_i - 2B_i\delta + \sum_{j=1}^{n}\Big[\beta_j|c_{ij}| + \beta_i|c_{ji}| + \beta_j|\omega_{ij}| + \frac{1}{1-\rho}\,\beta_i|\omega_{ji}| + \beta_j|b_{ij}| + \beta_i|b_{ji}| + |M_{ij}| + |M_{ji}|\Big] < 0.$$

Remark 5. Let p_ij = p*_ij = r_ij = r*_ij = 1/2. Then (H5) reduces to the following inequality:

(H5)_1
$$-2a_i + \sum_{j=1}^{n}\Big[\beta_j|c_{ij}| + \beta_i|c_{ji}| + \beta_j|\omega_{ij}| + \frac{1}{1-\rho}\,\beta_i|\omega_{ji}| + \beta_j|b_{ij}| + \beta_i|b_{ji}| + |M_{ij}| + |M_{ji}|\Big] < 0.$$

Then, by Theorem 1, we have the following corollary.

Corollary 1. (1) If sigma_i1 = 0 (i = 1,...,n) and (H1)-(H3) hold and, further, (H4)_1 holds, then the drive-response neural networks (1) and (2) are exponentially synchronized.

(2) If sigma_i1 != 0 (i = 1,...,n) and (H2)-(H3) hold and, further, (H5)_1 holds, then the drive-response neural networks (1) and (2) are exponentially synchronized.

3. Proof of the main result. Define the synchronization error signal epsilon_i(x,t) = u_i(x,t) - u~_i(x,t), where u_i and u~_i are the ith state variables of the drive and response neural networks, respectively. Subtracting (2) from (1), the error dynamics can be expressed as

(8)
$$\frac{\partial \varepsilon_i(x,t)}{\partial t} = \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(D_i\,\frac{\partial \varepsilon_i(x,t)}{\partial x_k}\Big) - a_i \varepsilon_i(x,t) + \sum_{j=1}^{n} c_{ij}\big(f_j(u_j(x,t)) - f_j(\tilde u_j(x,t))\big) + \sum_{j=1}^{n} \omega_{ij}\big(f_j(u_j(x,t-t_{ij}(t))) - f_j(\tilde u_j(x,t-t_{ij}(t)))\big) + \sum_{j=1}^{n} b_{ij}\int_{-\infty}^{t}\kappa_{ij}(t-s)\big(f_j(u_j(x,s)) - f_j(\tilde u_j(x,s))\big)\,ds - v_i,$$
$$\varepsilon_i(x,t) = \varphi_i(x,t) - \psi_i(x,t),\quad -\infty < t \le 0,\ x\in\Omega,$$
$$\sigma_{i1}\frac{\partial \varepsilon_i(x,t)}{\partial n} + \sigma_{i2}\,\varepsilon_i(x,t) = 0\quad (x\in\partial\Omega,\ \sigma_{i1},\sigma_{i2}\ge 0\ \text{and}\ \sigma_{i1}^2+\sigma_{i2}^2\ne 0),$$

where i = 1,...,n and the control input vector v(x,t) = (v_1(x,t),...,v_n(x,t))^T takes the form defined in (3).

By (7) and assumption (H4) or (H5), there exists a positive constant lambda such that

(9)
$$W_i^1 = 2\lambda - 2a_i - 2B_i\delta + \sum_{j=1}^{n}\Big[(\beta_j|c_{ij}|)^{2p_{ij}} + (\beta_i|c_{ji}|)^{2-2p_{ji}} + (\beta_j|\omega_{ij}|)^{2p^*_{ij}} + \frac{e^{2\lambda\tau}}{1-\rho}(\beta_i|\omega_{ji}|)^{2-2p^*_{ji}} + (\beta_j|b_{ij}|)^{2r_{ij}} + (\beta_i|b_{ji}|)^{2-2r_{ji}}\int_0^\infty\kappa_{ji}(s)e^{2\lambda s}\,ds + |M_{ij}|^{2r^*_{ij}} + |M_{ji}|^{2-2r^*_{ji}}\Big] \le 0$$

or

(10)
$$W_i^2 = 2\lambda - 2a_i + \sum_{j=1}^{n}\Big[(\beta_j|c_{ij}|)^{2p_{ij}} + (\beta_i|c_{ji}|)^{2-2p_{ji}} + (\beta_j|\omega_{ij}|)^{2p^*_{ij}} + \frac{e^{2\lambda\tau}}{1-\rho}(\beta_i|\omega_{ji}|)^{2-2p^*_{ji}} + (\beta_j|b_{ij}|)^{2r_{ij}} + (\beta_i|b_{ji}|)^{2-2r_{ji}}\int_0^\infty\kappa_{ji}(s)e^{2\lambda s}\,ds + |M_{ij}|^{2r^*_{ij}} + |M_{ji}|^{2-2r^*_{ji}}\Big] \le 0.$$

Now we begin to prove Theorem 1.

Proof of Theorem 1. Take the Lyapunov functional

$$V(t) = \sum_{i=1}^{n}\int_\Omega\Big[\varepsilon_i^2(x,t)\,e^{2\lambda t} + \sum_{j=1}^{n}\frac{(\beta_j|\omega_{ij}|)^{2-2p^*_{ij}}}{1-\rho}\int_{t-t_{ij}(t)}^{t}\varepsilon_j^2(x,s)\,e^{2\lambda(s+\tau)}\,ds + \sum_{j=1}^{n}(\beta_j|b_{ij}|)^{2-2r_{ij}}\int_0^\infty\kappa_{ij}(s)\int_{t-s}^{t}\varepsilon_j^2(x,z)\,e^{2\lambda(z+s)}\,dz\,ds\Big]dx.$$

Calculating D^+V(t) along the solution of (8), and using t_ij(t) <= tau and t'_ij(t) <= rho < 1, so that (1 - t'_ij(t)) e^{2 lambda (t - t_ij(t) + tau)} >= (1 - rho) e^{2 lambda t}, we obtain

$$D^+V(t) \le \sum_{i=1}^{n}\int_\Omega\Big[2\lambda\varepsilon_i^2(x,t)e^{2\lambda t} + 2\varepsilon_i(x,t)e^{2\lambda t}\frac{\partial\varepsilon_i(x,t)}{\partial t} + \sum_{j=1}^{n}(\beta_j|\omega_{ij}|)^{2-2p^*_{ij}}\Big(\frac{e^{2\lambda(t+\tau)}}{1-\rho}\varepsilon_j^2(x,t) - e^{2\lambda t}\varepsilon_j^2(x,t-t_{ij}(t))\Big) + \sum_{j=1}^{n}(\beta_j|b_{ij}|)^{2-2r_{ij}}\int_0^\infty\kappa_{ij}(s)\big(\varepsilon_j^2(x,t)e^{2\lambda(t+s)} - \varepsilon_j^2(x,t-s)e^{2\lambda t}\big)\,ds\Big]dx.$$

Substituting the right-hand side of (8) for the time derivative of epsilon_i, and using (H2) together with the elementary inequalities

$$2\beta_j|c_{ij}|\,|\varepsilon_i(x,t)|\,|\varepsilon_j(x,t)| \le (\beta_j|c_{ij}|)^{2p_{ij}}\varepsilon_i^2(x,t) + (\beta_j|c_{ij}|)^{2-2p_{ij}}\varepsilon_j^2(x,t),$$
$$2\beta_j|\omega_{ij}|\,|\varepsilon_i(x,t)|\,|\varepsilon_j(x,t-t_{ij}(t))| \le (\beta_j|\omega_{ij}|)^{2p^*_{ij}}\varepsilon_i^2(x,t) + (\beta_j|\omega_{ij}|)^{2-2p^*_{ij}}\varepsilon_j^2(x,t-t_{ij}(t)),$$
$$2\beta_j|b_{ij}|\,|\varepsilon_i(x,t)|\,|\varepsilon_j(x,t-s)| \le (\beta_j|b_{ij}|)^{2r_{ij}}\varepsilon_i^2(x,t) + (\beta_j|b_{ij}|)^{2-2r_{ij}}\varepsilon_j^2(x,t-s),$$
$$2|M_{ij}|\,|\varepsilon_i(x,t)|\,|\varepsilon_j(x,t)| \le |M_{ij}|^{2r^*_{ij}}\varepsilon_i^2(x,t) + |M_{ij}|^{2-2r^*_{ij}}\varepsilon_j^2(x,t),$$

and then exchanging the summation indices i and j in the terms containing epsilon_j^2(x,t), we arrive (using (6) for the distributed-delay cross terms) at

(11)
$$D^+V(t) \le \sum_{i=1}^{n}\int_\Omega 2\varepsilon_i(x,t)\,e^{2\lambda t}\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(D_i\frac{\partial\varepsilon_i(x,t)}{\partial x_k}\Big)dx + e^{2\lambda t}\sum_{i=1}^{n}\big(W_i^1 + 2B_i\delta\big)\int_\Omega \varepsilon_i^2(x,t)\,dx,$$

where the second coefficient coincides with W_i^2 by (9) and (10). By Green's formula,

(12)
$$\int_\Omega \varepsilon_i(x,t)\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(D_i\frac{\partial\varepsilon_i(x,t)}{\partial x_k}\Big)dx = \int_{\partial\Omega} D_i\,\varepsilon_i(x,t)\frac{\partial\varepsilon_i(x,t)}{\partial n}\,dS - \int_\Omega D_i\sum_{k=1}^{m}\Big(\frac{\partial\varepsilon_i(x,t)}{\partial x_k}\Big)^2 dx.$$

Noting the boundary condition, if sigma_i1 = 0 (i = 1,...,n) and (H1) holds, then epsilon_i = 0 on the boundary, so the boundary integral vanishes; by D_i >= B_i and Lemma 1, (12) yields

(13)
$$\int_\Omega \varepsilon_i(x,t)\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(D_i\frac{\partial\varepsilon_i(x,t)}{\partial x_k}\Big)dx \le -B_i\int_\Omega\sum_{k=1}^{m}\Big(\frac{\partial\varepsilon_i(x,t)}{\partial x_k}\Big)^2 dx \le -B_i\,\delta\int_\Omega \varepsilon_i^2(x,t)\,dx,$$

where delta = (omega_m / |Omega|)^{2/m} > 0. Further noting the boundary condition, if sigma_i1 != 0 (i = 1,...,n), then the normal derivative of epsilon_i equals -(sigma_i2/sigma_i1) epsilon_i on the boundary, and since D_i >= 0, (12) yields

(14)
$$\int_\Omega \varepsilon_i(x,t)\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\Big(D_i\frac{\partial\varepsilon_i(x,t)}{\partial x_k}\Big)dx = -\int_{\partial\Omega} D_i\,\frac{\sigma_{i2}}{\sigma_{i1}}\,\varepsilon_i^2(x,t)\,dS - \int_\Omega D_i\sum_{k=1}^{m}\Big(\frac{\partial\varepsilon_i(x,t)}{\partial x_k}\Big)^2 dx \le 0.$$

Hence, if sigma_i1 = 0 (i = 1,...,n) and (H1) holds, it follows from (11) and (13) that

(15)
$$D^+V(t) \le e^{2\lambda t}\sum_{i=1}^{n} W_i^1 \int_\Omega \varepsilon_i^2(x,t)\,dx \le 0,$$

where W_i^1 is defined in (9). In addition, if sigma_i1 != 0 (i = 1,...,n), it follows from (11) and (14) that

(16)
$$D^+V(t) \le e^{2\lambda t}\sum_{i=1}^{n} W_i^2 \int_\Omega \varepsilon_i^2(x,t)\,dx \le 0,$$

where W_i^2 is defined in (10).

From (15) and (16), we know that, for the general boundary conditions, V(t) <= V(0) for all t >= 0, where

$$V(0) = \sum_{i=1}^{n}\int_\Omega\Big[|\varphi_i(x,0)-\psi_i(x,0)|^2 + \sum_{j=1}^{n}\frac{(\beta_j|\omega_{ij}|)^{2-2p^*_{ij}}}{1-\rho}\int_{-t_{ij}(0)}^{0}|\varphi_j(x,s)-\psi_j(x,s)|^2\,e^{2\lambda(s+\tau)}\,ds + \sum_{j=1}^{n}(\beta_j|b_{ij}|)^{2-2r_{ij}}\int_0^\infty\kappa_{ij}(s)\int_{-s}^{0}|\varphi_j(x,z)-\psi_j(x,z)|^2\,e^{2\lambda(z+s)}\,dz\,ds\Big]dx.$$

Estimating the inner integrals by the sup-norms of phi_j - psi_j (note that t_ij(0) <= tau and the bounds of the exponential integrals) and letting

(17)
$$\Upsilon = \max_{1\le i\le n}\Big\{1 + \sum_{j=1}^{n}\Big[\frac{e^{2\lambda\tau}}{(1-\rho)\,2\lambda}(\beta_i|\omega_{ji}|)^{2-2p^*_{ji}} + \frac{1}{2\lambda}(\beta_i|b_{ji}|)^{2-2r_{ji}}\int_0^\infty\kappa_{ji}(s)e^{2\lambda s}\,ds\Big]\Big\} \ge 1,$$

we obtain V(0) <= Upsilon ||phi - psi||_2^2. Since V(t) >= e^{2 lambda t} ||epsilon(.,t)||_2^2, the following inequality holds:

$$\|\varepsilon(\cdot,t)\|_2^2 \le \Upsilon\,\|\varphi-\psi\|_2^2\,e^{-2\lambda t},$$

that is,

$$\|u(\cdot,t)-\tilde u(\cdot,t)\|_2 \le \sqrt{\Upsilon}\,\|\varphi-\psi\|_2\,e^{-\lambda t},$$

and, by Definition 1, the drive-response neural networks are exponentially synchronized. This completes the proof of Theorem 1.

Remark 6. If sigma_i2 = 0 and sigma_i1 != 0 (i = 1,...,n), the result of Theorem 1 reduces to that in [17]. If, furthermore, b_ij = 0 (i, j = 1,...,n), the result of Theorem 1 reduces to that in [9, 22]. And, if sigma_i1 = 0 and sigma_i2 != 0 (i = 1,...,n), the result of Theorem 1 reduces to that in [24].
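The exponential decay of the synchronization error established above can be observed numerically. The following sketch is a deliberately simplified, space-clamped (diffusion-free and distributed-delay-free) two-neuron drive-response pair with the linear feedback control of the form (3); all parameter values are illustrative, and this is a toy check that the error decays, not a simulation of the PDE model itself.

```python
import math

# Space-clamped two-neuron drive-response sketch with feedback v = M (u - z),
# integrated by explicit Euler with a single constant delay tau = 0.1.
# The strong self-decay a_i dominates the coupling, so the error contracts.

def step(u, z, dt, a, C, W, M, u_d, z_d):
    # one Euler step for drive u and response z; u_d, z_d are delayed states
    f = math.tanh
    un, zn = [0.0, 0.0], [0.0, 0.0]
    for i in range(2):
        cu = sum(C[i][j] * f(u[j]) + W[i][j] * f(u_d[j]) for j in range(2))
        cz = sum(C[i][j] * f(z[j]) + W[i][j] * f(z_d[j]) for j in range(2))
        v = sum(M[i][j] * (u[j] - z[j]) for j in range(2))
        un[i] = u[i] + dt * (-a[i] * u[i] + cu)
        zn[i] = z[i] + dt * (-a[i] * z[i] + cz + v)
    return un, zn

a = [5.3, 5.25]
C = [[1/6, 1/3], [1/6, 2/3]]
W = [[1/2, 1/3], [2/3, 1/2]]
M = [[1.0, 1/6], [1/6, 1/2]]
dt, tau_steps = 1e-3, 100                      # delay tau = 0.1

hu = [[1.0, -1.0]] * (tau_steps + 1)           # constant drive history
hz = [[-0.5, 0.5]] * (tau_steps + 1)           # constant response history
for _ in range(5000):                          # integrate up to t = 5
    u, z = step(hu[-1], hz[-1], dt, a, C, W, M, hu[0], hz[0])
    hu.append(u); hz.append(z); hu.pop(0); hz.pop(0)

err = max(abs(hu[-1][i] - hz[-1][i]) for i in range(2))
print(err < 1e-3)  # error has decayed far below the initial mismatch of 1.5
```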

4. Example. In this section, we discuss a numerical example.

Example 1. Consider the delayed reaction-diffusion neural networks with general boundary conditions:

(18)
$$\frac{\partial u_i(x,t)}{\partial t} = \frac{\partial^2 u_i(x,t)}{\partial x^2} - a_i u_i(x,t) + \sum_{j=1}^{2} c_{ij} f_j(u_j(x,t)) + \sum_{j=1}^{2} \omega_{ij} f_j(u_j(x,t-t_j)) + \sum_{j=1}^{2} b_{ij}\int_{-\infty}^{t}\kappa_{ij}(t-s)\, f_j(u_j(x,s))\,ds + I_i,\quad i = 1, 2,$$
$$u_i(x,t) = \varphi_i(x,t),\quad -\infty < t \le 0,\ x\in\Omega,\ i = 1, 2,$$
$$\sigma_{i1}\frac{\partial u_i(x,t)}{\partial n} + \sigma_{i2}\,u_i(x,t) = 0\quad (x\in\partial\Omega,\ \sigma_{i1},\sigma_{i2}\ge 0\ \text{and}\ \sigma_{i1}^2+\sigma_{i2}^2\ne 0),\ i = 1, 2.$$

The corresponding response neural networks can be expressed as follows:

(19)
$$\frac{\partial \tilde u_i(x,t)}{\partial t} = \frac{\partial^2 \tilde u_i(x,t)}{\partial x^2} - a_i \tilde u_i(x,t) + \sum_{j=1}^{2} c_{ij} f_j(\tilde u_j(x,t)) + \sum_{j=1}^{2} \omega_{ij} f_j(\tilde u_j(x,t-t_j)) + \sum_{j=1}^{2} b_{ij}\int_{-\infty}^{t}\kappa_{ij}(t-s)\, f_j(\tilde u_j(x,s))\,ds + I_i + v_i,\quad i = 1, 2,$$
$$\tilde u_i(x,t) = \psi_i(x,t),\quad -\infty < t \le 0,\ x\in\Omega,\ i = 1, 2,$$
$$\sigma_{i1}\frac{\partial \tilde u_i(x,t)}{\partial n} + \sigma_{i2}\,\tilde u_i(x,t) = 0\quad (x\in\partial\Omega,\ \sigma_{i1},\sigma_{i2}\ge 0\ \text{and}\ \sigma_{i1}^2+\sigma_{i2}^2\ne 0),\ i = 1, 2.$$

Let Omega be a bounded open domain for which we may take delta = 1, and B_i = 1 (i = 1, 2). Let a_1 = 5.3 and a_2 = 5.25; f_i = tanh(u_i(x,t)) (i = 1, 2) satisfy (H2) with beta_1 = 1, beta_2 = 1; c_11 = 1/6,

c_12 = 1/3, c_21 = 1/6, c_22 = 2/3, omega_11 = 1/2, omega_12 = 1/3, omega_21 = 2/3, omega_22 = 1/2 and b_11 = 1, b_12 = 1/3, b_21 = 1/2, b_22 = 1/3. Let rho = 1/2 and kappa_ij = e^{-t} (i, j = 1, 2), which satisfy (5)-(7). Let M_11 = 1, M_12 = 1/6, M_21 = 1/6, M_22 = 1/2 and I_1 = I_2 = 0. By a simple calculation, we can choose p_ij = p*_ij = r_ij = r*_ij = 1/2 (i, j = 1, 2) such that

$$-2a_1 - 2B_1\delta + \sum_{j=1}^{2}\Big[\beta_j c_{1j} + \beta_1 c_{j1} + \beta_j \omega_{1j} + \frac{1}{1-\rho}\,\beta_1\omega_{j1} + \beta_j b_{1j} + \beta_1 b_{j1} + M_{1j} + M_{j1}\Big]$$
$$\le -2a_1 + \sum_{j=1}^{2}\Big[\beta_j c_{1j} + \beta_1 c_{j1} + \beta_j \omega_{1j} + \frac{1}{1-\rho}\,\beta_1\omega_{j1} + \beta_j b_{1j} + \beta_1 b_{j1} + M_{1j} + M_{j1}\Big]$$
$$= -2\times 5.3 + \Big[\tfrac16+\tfrac13+\tfrac16+\tfrac16+\tfrac12+\tfrac13+2\big(\tfrac12+\tfrac23\big)+1+\tfrac13+1+\tfrac12+1+\tfrac16+1+\tfrac16\Big] = -10.6 + \tfrac{55}{6} \approx -1.43 < 0$$

and

$$-2a_2 - 2B_2\delta + \sum_{j=1}^{2}\Big[\beta_j c_{2j} + \beta_2 c_{j2} + \beta_j \omega_{2j} + \frac{1}{1-\rho}\,\beta_2\omega_{j2} + \beta_j b_{2j} + \beta_2 b_{j2} + M_{2j} + M_{j2}\Big]$$
$$\le -2\times 5.25 + \Big[\tfrac16+\tfrac23+\tfrac13+\tfrac23+\tfrac23+\tfrac12+2\big(\tfrac13+\tfrac12\big)+\tfrac12+\tfrac13+\tfrac13+\tfrac13+\tfrac16+\tfrac12+\tfrac16+\tfrac12\Big] = -10.5 + 7.5 = -3 < 0.$$

These inequalities mean that (H4)_1 and (H5)_1 hold for this example. It follows from Corollary 1 that the drive-response neural networks (18) and (19) are exponentially synchronized.
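The arithmetic above can be double-checked with exact rational arithmetic. The sketch below recomputes the (H5)_1 expression (which dominates the (H4)_1 expression) for i = 1, 2 using the example's parameters with p = p* = r = r* = 1/2 and beta_1 = beta_2 = 1:

```python
from fractions import Fraction as F

# Verify the example's inequalities: -2 a_i + sum_j [ c_ij + c_ji + w_ij
# + w_ji/(1 - rho) + b_ij + b_ji + M_ij + M_ji ] < 0 for i = 1, 2.
a = [F(53, 10), F(21, 4)]                      # a_1 = 5.3, a_2 = 5.25
c = [[F(1, 6), F(1, 3)], [F(1, 6), F(2, 3)]]
w = [[F(1, 2), F(1, 3)], [F(2, 3), F(1, 2)]]
b = [[F(1, 1), F(1, 3)], [F(1, 2), F(1, 3)]]
M = [[F(1, 1), F(1, 6)], [F(1, 6), F(1, 2)]]
rho = F(1, 2)

vals = []
for i in range(2):
    s = sum(c[i][j] + c[j][i] + w[i][j] + w[j][i] / (1 - rho)
            + b[i][j] + b[j][i] + M[i][j] + M[j][i] for j in range(2))
    vals.append(-2 * a[i] + s)

print(vals)  # [Fraction(-43, 30), Fraction(-3, 1)]: both negative
```

The exact values -43/30 (about -1.43) and -3 match the hand computation, confirming that (H4)_1 and (H5)_1 hold.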

5. Conclusions. In this paper, by using some inequality techniques and constructing a suitable Lyapunov functional, some sufficient conditions have been given to ensure that the drive-response neural networks (1) and (2) are exponentially synchronized. These conditions are easy to verify, as they are expressed as algebraic inequalities. The methods used here can also be applied to more general exponential synchronization problems. Our new results for delayed reaction-diffusion neural networks with general boundary conditions generalize the corresponding ones in [9, 17, 22, 24].

Acknowledgments. The author sincerely thanks Associate Editor Kening Lu and the referee for their helpful advice.

REFERENCES

1. C. Cheng, T. Liao, J. Yan and C. Hwang, Exponential synchronization of a class of neural networks with time-varying delays, IEEE Trans. Syst., Man, Cyber., Part B: Cybernetics 36 (2006), 209-215.

2. M.A. Cohen and S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Syst. Man Cyber. 13 (1983), 815-826.

3. D. Gilbarg and N.S. Trudinger, Elliptic partial differential equations of second order, 3rd edition, Springer-Verlag, Berlin, 2001.

4. K. Li and Q. Song, Exponential stability of impulsive Cohen-Grossberg neural networks with time-varying delays and reaction-diffusion terms, Neurocomputing 72 (2008), 231-240.

5. J. Liang and J. Cao, Global exponential stability of reaction-diffusion recurrent neural networks with time-varying delays, Phys. Lett. A 314 (2003), 434-442.

6. X. Liao, C. Li and K.W. Wong, Criteria for exponential stability of Cohen-Grossberg neural networks, Neural Networks 17 (2004), 1401-1414.

7. X. Lou and B. Cui, Delay-dependent stochastic stability of delayed Hopfield neural networks with Markovian jump parameters, J. Math. Anal. Appl. 328 (2007), 316-326.
8. X. Lou and B. Cui, Stochastic exponential stability for Markovian jumping BAM neural networks with time-varying delays, IEEE Trans. Syst., Man, Cyber., Part B: Cybernetics 37 (2007), 713-719.

9. X. Lou and B. Cui, Asymptotic synchronization of a class of neural networks with reaction-diffusion terms and time-varying delays, Comput. Math. Appl. 52 (2006), 897-904.

10. G. Lu, Global exponential stability and periodicity of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions, Chaos Solit. Fract. 35 (2008), 116-125.

11. H. Lu and C.V. Leeuwen, Synchronization of chaotic neural networks via output or state coupling, Chaos Solit. Fract. 30 (2006), 166-176.

12. L.M. Pecora and T.L. Carroll, Synchronization in chaotic systems, Phys. Rev. Lett. 64 (1990), 821-824.

13. L.M. Pecora, T.L. Carroll, G.A. Johnson, D.J. Mar and J.F. Heagy, Fundamentals of synchronization in chaotic systems, concepts and applications, Chaos 7 (1997), 520-543.

14. J. Qiu, Exponential stability of impulsive neural networks with time-varying delays and reaction-diffusion terms, Neurocomputing 70 (2007), 1102-1108.

15. J. Qiu, Y. Jin and Q. Zheng, Delay-dependent global asymptotic stability in neutral-type delayed neural networks with reaction-diffusion terms, in Proc. of International Symposium on Neural Networks, Vol. 1, Springer, Beijing, 2008.

16. J. Qiu and Q. Ren, Robust stability in interval delayed neural networks of neutral type, in Proc. of International Conference on Intelligent Computing, Part 1, Springer, Kunming, 2006.

17. L. Sheng, H. Yang and X. Lou, Adaptive exponential synchronization of delayed neural networks with reaction-diffusion terms, Chaos Solit. Fract. 25 (2007).

18. Q. Song, J. Cao and Z. Zhao, Periodic solutions and its exponential stability of reaction-diffusion recurrent neural networks with continuously distributed delays, Nonlinear Anal.: Real World Appl. 7 (2006), 65-80.

19. Q. Song and J. Zhang, Global exponential stability of impulsive Cohen-Grossberg neural network with time-varying delays, Nonlinear Anal.: Real World Appl. 9 (2008), 500-510.

20. Q. Song, Z. Zhao and Y. Li, Global exponential stability of BAM neural networks with distributed delays and reaction-diffusion terms, Phys. Lett. A 335 (2005), 213-225.

21. Y. Sun and J. Cao, Adaptive lag synchronization of unknown chaotic delayed recurrent neural networks with noise perturbation, Phys. Lett. A 364 (2007), 277-285.

22. Y. Wang and J. Cao, Synchronization of a class of delayed neural networks with reaction-diffusion terms, Phys. Lett. A 369 (2007), 201-211.

23. Z. Wang, Y. Liu and X. Liu, On global asymptotic stability of neural networks with discrete and distributed delays, Phys. Lett. A 345 (2005), 299-308.

24. K. Wang, Z. Teng and H. Jiang, Adaptive synchronization of neural networks with time-varying delay and distributed delay, Physica A: Statist. Mech. Appl. 387 (2008), 631-642.

25. W. Xiong, W. Xie and J. Cao, Adaptive exponential synchronization of delayed chaotic networks, Physica A 370 (2006), 832-842.

26. P. Yan and T. Lv, Stability of delayed reaction-diffusion high-order Cohen-Grossberg neural networks with variable coefficients, in Proc. of Second International Symposium on Intelligent Information Technology Application, Vol. 1, IEEE, Wuhan, 2008.

27. K. Yuan, J. Cao and H.X. Li, Robust stability of switched Cohen-Grossberg neural networks with mixed time-varying delays, IEEE Trans. Syst., Man, Cyber., Part B: Cybernetics 36 (2006), 1356-1363.

28. J. Zhang, Y. Suda and H. Komine, Global exponential stability of Cohen-Grossberg neural networks with variable delays, Phys. Lett. A 338 (2005), 44-50.

29. H. Zhao and G. Wang, Existence of periodic oscillatory solution of reaction-diffusion neural networks with delays, Phys. Lett. A 343 (2005), 372-383.

30. H. Zhao and K. Wang, Dynamical behaviors of Cohen-Grossberg neural networks with delays and reaction-diffusion terms, Neurocomputing 70 (2006), 536-543.

School of Science, Anhui Agricultural University, Hefei 230036, P.R. China
Email address: want2fly22@163.com

Teaching and Research Section of Computer, Army Officer Academy, Hefei 230031, P.R. China
Email address: lt41@163.com