Analysis of Neural Networks with Chaotic Dynamics

Chaos, Solitons & Fractals Vol. 3, No. 2, pp. 133-139, 1993. Printed in Great Britain. 0960-0779/93 $6.00 + .00. © 1993 Pergamon Press Ltd

FRANÇOIS CHAPEAU-BLONDEAU
Institut de Biologie Théorique, Université d'Angers, 10 rue André Boquel, 49100 Angers, France

(Received 21 October 1992)

Abstract - We consider here simple neural network models, with continuous positive bounded activities and non-symmetric synaptic connections. We investigate the various dynamic regimes which are accessible for these systems. We show that unstable dynamics are much more probable than stable dynamics. Among unstable dynamics, oscillatory (periodic or quasi-periodic) as well as chaotic dynamics are exhibited and analyzed. Situations are described where oscillatory dynamics are used to implement controllable neural pacemakers. Chaotic dynamics are shown to exhibit classical characteristics of deterministic chaos, such as bifurcation diagrams, fractal attractors, and sensitive dependence on initial conditions. This last property sets a limit to any long-term prediction concerning the evolution of the network in a chaotic regime.

1. INTRODUCTION

Neural network models incorporate some basic features of biological neural assemblies (high connectivity in a net, weighted sums of activities, nonlinear-response units, etc.). From these basic elements, their properties are developed, with the aim of capturing significant global properties of biological neural systems. A class of models that has been extensively studied is the one formed by networks with discrete output activities and symmetric synaptic connections [1]. For these models, an energy function can be introduced which acts as a Liapunov function for the dynamics, and ensures that the system always evolves to reach a stable state.
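The convergence guaranteed by such an energy function can be illustrated with a minimal sketch (all names, sizes and seeds here are illustrative, not from the paper): with symmetric weights and no self-connections, the energy E = -(1/2) sᵀJs never increases under asynchronous updates of ±1 units, so the dynamics must settle into a stable state.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
A = rng.normal(size=(N, N))
J = (A + A.T) / 2            # symmetric synaptic matrix
np.fill_diagonal(J, 0.0)     # no self-connections
s = rng.choice([-1.0, 1.0], size=N)

def energy(J, s):
    """Liapunov function of the symmetric-weight model: E = -1/2 s^T J s."""
    return -0.5 * s @ J @ s

energies = [energy(J, s)]
for _ in range(200):                       # asynchronous (sequential) updates
    i = rng.integers(N)
    s[i] = 1.0 if J[i] @ s >= 0 else -1.0  # align unit i with its local field
    energies.append(energy(J, s))

# Energy is non-increasing along the trajectory, hence convergence.
print(energies[0], energies[-1])
```

Each flip aligns one unit with its local field, which can only lower (or preserve) E; this is exactly why no such guarantee survives once the symmetry assumption is dropped, as in the networks studied below.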
This type of attractor neural network can be studied with statistical mechanics methods (spin glass analogy) [2], and offers a framework to represent memory processes in neural systems. These models can be made more realistic from a biological standpoint. In this direction, we consider here neural networks with continuous activities and asymmetric synaptic connections. A biological neuron can either be quiescent or firing with a possibly varying but bounded intensity. Accordingly, its output activity will be represented by a continuous variable between 0 and 1, which therefore cannot go negative. We add no extra conditions to prescribe values for the synapses, nor dilute them as done for instance by Stariolo [3]. We then study the different dynamics that are accessible for these networks devoid of an energy function. With arbitrary synaptic connections, we show that stability is not preserved and that unstable dynamics are very probable. Among these unstable dynamics, oscillatory (periodic or quasi-periodic) as well as chaotic evolutions are exhibited. Chaos, which has been experimentally observed in biological neural systems [4-6], can also be obtained from theoretical models in appropriate conditions. Recent theoretical works have reported chaos in neural networks, but with models of more complicated structures, e.g. with neurons with complex internal states [7], or with models where chaos is detected in quantities having no direct physical interpretation [8,9], or in stochastic networks at the thermodynamic limit [10]. These contexts tend to obscure the minimal conditions in which chaos may occur in neural networks. In addition, simple mathematical

models which are devised to generate chaos [11] always rely, as a key element, on non-monotone nonlinearities. In neural networks, the basic nonlinearity is that of the input-output neuron transfer function, which is in essence a monotone nonlinearity. It is thus not a priori obvious if, and how, chaos may occur in these systems. We show here direct evidence of chaos, in individual neuron output activities, in simple networks with no restrictive conditions on synaptic values. The present work contains a generalization, to a broad class of neural networks with unspecific architecture, of properties that we observed in small neural circuits with no more than two or three neurons associated in very specific architectures [12].

2. THE NETWORK MODEL

We consider fully connected networks of N nonlinear neuron-like elements. J_ij is the weight of the synaptic connection from neuron j to neuron i. We assume that the synapses are devoid of any intrinsic plasticity, because we want to focus the analysis on the dynamics of the neuron activities, in comparison to which the dynamics of the synapses can be considered as quasistatic. S_i(t) is the output activity of neuron i at time t. The neurons have a classic sigmoid transfer function, whose expression, for neuron i, is given by equation (1). Both the threshold θ_i and the slope β_i of the sigmoid can be considered as adjustable parameters for neuron i.

f_i(h) = [1 + exp(-β_i(h - θ_i))]^(-1).   (1)

We choose a discrete-time dynamics for the evolution of the neuron activities, governed by equation (2), with either parallel or sequential updating of the activities. (We observed that the updating mode does not qualitatively affect the global dynamic properties of the networks.) This type of discrete-time dynamics seeks to model the effect of delays in the transmission of neural signals [12]. The iteration step Δt = 1 is interpreted as an average transmission delay of the neural signals.

S_i(t + 1) = f_i( Σ_j J_ij S_j(t) ).   (2)
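Equations (1) and (2) are straightforward to simulate. The sketch below is illustrative: the parallel updating mode and the parameter values β_i = 5, θ_i = 0 follow the text, while the function names, network size and random seed are arbitrary choices.

```python
import numpy as np

def f(h, beta=5.0, theta=0.0):
    """Sigmoid transfer function of equation (1)."""
    return 1.0 / (1.0 + np.exp(-beta * (h - theta)))

def simulate(J, s0, steps):
    """Parallel discrete-time dynamics of equation (2):
    S_i(t+1) = f(sum_j J_ij S_j(t))."""
    s = np.asarray(s0, dtype=float)
    traj = [s.copy()]
    for _ in range(steps):
        s = f(J @ s)
        traj.append(s.copy())
    return np.array(traj)

rng = np.random.default_rng(0)
N = 3
J = rng.uniform(-1.0, 1.0, (N, N))   # arbitrary, non-symmetric weights
traj = simulate(J, rng.random(N), steps=100)

# A crude convergence check in the spirit of Section 3: has every activity
# settled to within a small tolerance over the last iterates?
settled = bool(np.all(traj[-20:].max(axis=0) - traj[-20:].min(axis=0) < 1e-6))
print(traj.shape, settled)
```

Each row of `traj` is the activity vector S(t); the activities remain in (0, 1) by construction of the sigmoid, matching the bounded positive activities assumed in the model.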
A valuable aspect of this type of elementary neural network model is that a wide range of dynamic behavior is accessible. In contrast with Hopfield networks [1, 13], no general conditions of stability exist for this category of models. Depending on the values assigned to the parameters J_ij, θ_i and β_i, both stable and unstable dynamics can result.

3. MONTE CARLO EVALUATION OF THE STABILITY

A Monte Carlo approach was used to evaluate the relative likelihood of stable and unstable dynamics under various conditions for assigning values to the parameters of the network. Figure 1 represents an evaluation of the probability of stability as a function of the number N of neurons in the network. The curves were obtained as follows. For a given number N of neurons, a large number of networks were constructed by randomly selecting the J_ij from the interval [-1, 1] with uniform probability, and with θ_i = 0 and β_i = 5 for all the neurons. The stability probability was then evaluated as the frequency of stability over the networks of the statistical ensemble. For each network of the ensemble, initial conditions were selected randomly, and the dynamics described by equations (1) and (2) were allowed to evolve for a sufficient time, after which the outputs S_i(t) were tested to check if they had reached a stable state or not. Figure 1(a) represents the probability that a

Fig. 1. An evaluation of the probability of stability for: (a) a single neuron; and (b) all the neurons of the network, as a function of the number N of neurons in the network.

given neuron output S_i(t) shows a stable time evolution, and Fig. 1(b) the probability that all neuron outputs in the network be stable. From the two curves in Fig. 1, it is clear that stability is not at all a guaranteed property for the considered neural systems. Very specific choices of the synaptic weights must be made if stability is required, to implement memory processes for instance. The precise shape of the stability curves of Fig. 1 evolves with the values of the parameters θ_i and β_i. So far, no general statistical theory has been found that could predict these stability curves for different classes of neural networks.

4. CHARACTERIZATION OF UNSTABLE DYNAMICS

The unstable dynamics that are accessible were found to exhibit qualitatively different natures. We observed periodic, quasi-periodic, as well as chaotic evolutions. Figure 2 shows typical examples of unstable evolutions of the activity S_i(t) in a network of three neurons, with θ_i = 0 and β_i = 5 for all the neurons, and with synaptic weights J_ij between -1 and +1. The dynamic regime, from periodicity in Fig. 2(a) to chaos in Fig. 2(d), is changed here by alteration of only one of the parameters of the network (namely J_13), while all the other parameters remain fixed. The characteristics of a given time evolution (amplitude, repetition rate, shape, etc.) can be changed through modifications of parameters of the system. For instance, for oscillatory regimes, conditions were found in which a neuron generates bursts of high activity, separated by intervals of low activity, with a repetition rate adjustable through alteration of a single synaptic weight or a neuron threshold.
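The single-parameter route from periodicity to chaos described above can be explored numerically by sweeping one weight and recording the post-transient activities, the raw data of a bifurcation diagram. This is a hedged sketch: the choice of J_13 (entry [0, 2], i.e. the weight from neuron 3 to neuron 1) as control parameter follows the text, while the sweep range, transient length, seed and function name are illustrative.

```python
import numpy as np

def f(h, beta=5.0, theta=0.0):
    """Sigmoid transfer function of equation (1)."""
    return 1.0 / (1.0 + np.exp(-beta * (h - theta)))

def bifurcation_data(J, idx=(0, 2), w_values=None, transient=1000, keep=100):
    """Sweep the single synaptic weight J[idx]; for each value, discard a
    transient and record `keep` successive activities of neuron 0."""
    if w_values is None:
        w_values = np.linspace(-1.0, 1.0, 51)
    diagram = np.empty((len(w_values), keep))
    for k, w in enumerate(w_values):
        Jw = J.copy()
        Jw[idx] = w                      # alter only the control parameter
        s = np.full(J.shape[0], 0.5)
        for _ in range(transient):
            s = f(Jw @ s)
        for t in range(keep):
            s = f(Jw @ s)
            diagram[k, t] = s[0]
    return w_values, diagram

rng = np.random.default_rng(1)
J = rng.uniform(-1.0, 1.0, (3, 3))       # three-neuron network as in Fig. 2
w, d = bifurcation_data(J)
print(d.shape)
```

Plotting each row of `d` as a column of dots against its weight value reproduces the structure of a diagram like Fig. 3: a single dot for a fixed point, a few dots for a cycle, a dense band once chaos sets in.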
Such a bursting mechanism implements a controllable neural pacemaker. For chaotic regimes, we have verified that classical characteristics of deterministic chaos [14] are present, such as bifurcation diagrams (Fig. 3), sensitive dependence on initial conditions with Liapunov exponents (Fig. 4), and fractal attractors (Figs 5 and 6). A time evolution of a given signal S_i(t) can be characterized by the phase portrait of its attractor in the phase plane spanned by the quantities {S_i(t), S_i(t + 1)}. The typical situations that were observed are:

- stable or periodic regimes, with phase portraits that reduce to a finite set of points in the phase plane {S_i(t), S_i(t + 1)};

- quasi-periodic regimes, with phase portraits that describe a continuous closed curve in the phase plane;

Fig. 2. Different time evolutions of the output activity S_i(t) in a three-neuron network, obtained for different values of the synaptic weight J_13, all other parameters remaining fixed. (a) Periodic regime; (b) quasi-periodic regime; (c) bursting chaotic regime; (d) chaotic regime.

Fig. 3. Bifurcation diagram of a neuron output activity S_i(t) as a function of a synaptic weight J_ij chosen as control parameter, all other parameters of the network remaining fixed. The activity evolves from stability to chaos through a cascade of period doubling.

- low-dimensional chaos, with phase portraits that form a fractal set of dimension between 1 and 2 in the phase plane, as in Fig. 5 (left);

- high-dimensional chaos, with phase portraits that densely fill finite regions of the phase plane, as in Fig. 5 (right).

It was possible, for a given network, to transform one type of phase portrait into another by continuously changing the parameters J_ij. For these neural networks, it is also possible to select as phase variables two arbitrary output activities S_i(t) and S_j(t) evaluated at the same time t. The resulting attractors in the phase plane {S_i(t), S_j(t)} are also suitable to reveal the existence of low-dimensional chaos, as shown in Fig. 6. Attractors of this type can be interpreted as pointing out a coherence or correlation in the time evolution of a pair of output activities. If these two variables were completely independent, one could expect the attractor to densely fill a

Fig. 4. Sensitive dependence on initial conditions for an output activity S_i(t) in a chaotic regime: the diagram shows the time evolution of the distance d(t) between two distinct sequences of iterates initiated at t = 0 with initial values for S_i distant of 10^-14. A Liapunov exponent of 0.62 can be deduced for the dynamics.

Fig. 5. Two examples of the phase portrait of the attractor for the dynamics of an output activity S_i(t) in networks of no more than 10 neurons, displayed in the phase plane spanned by {S_i(t), S_i(t + 1)}. The folded structure (left) and self-similarity (right) are characteristic of fractal attractors.

region of the phase plane. Conversely, if these two variables were connected by a fixed functional dependence, one would expect a regular one-dimensional curve as attractor. The fractal attractors of Fig. 6 describe an intermediate situation, characterizing the complex connection that can be found between the output activities belonging to a same neural network.

5. DISCUSSION

The reported results show that qualitatively different dynamic regimes can exist in neural networks, even with very simple structures. It is interesting to note that this dynamic variability arises here from the sole evolution of the neuron activities; no intrinsic synaptic plasticity need be incorporated to obtain complex dynamic behavior. This contrasts with recent models that incorporate plasticity and dilution of synapses [3], or rely on complex

Fig. 6. Two examples of the phase portrait of the attractor for the dynamics of a 10-neuron network, displayed in the phase plane spanned by the two phase variables {S_i(t), S_j(t)}. To go from left to right, all the synaptic weights of the network were increased by a small amount of 0.04. The thinly folded structure of the attractors is characteristic of fractal sets.

architectures, or complex equations for both neuron and synapse dynamics [9,10,15], in order to generate chaotic evolutions in a neural network. The dynamic regime of a network can be changed through modifications of internal or external parameters, such as synaptic weights or external neuron inputs (equivalent to the alteration of neuron thresholds). Thus, for instance, through synaptic plasticity, the system can learn to adopt a specific dynamic regime in given conditions. Any one of the observed regimes possesses some degree of structural stability, for it can subsist over finite ranges of parameter values. With such properties, the different dynamic regimes of a neural network provide frameworks to represent various cognitive functions. Stable regimes, which are the ones most often used so far, to implement memory processes for instance, may not be the only useful ones. Periodic and quasi-periodic regimes offer schemes for the control of rhythmic biological functions such as respiration or locomotion. As mentioned in Section 4, the neural networks discussed here can behave, under specific conditions, as internally or externally controllable oscillators. The role of chaotic regimes in biological neural systems is not yet fully understood. Nevertheless, it is now clear, both from experimental [4-6, 16] and theoretical evidence, that chaotic dynamics can exist in neural systems, even with simple structures, as demonstrated here.
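The sensitive dependence documented in Fig. 4 can be probed in the same spirit: iterate two copies of the dynamics of equation (2) from initial states 10^-14 apart and fit the growth of log d(t), whose slope estimates the largest Liapunov exponent. This is an illustrative sketch with a random network and arbitrary seed, not the specific network of Fig. 4; a negative slope indicates stable dynamics, a positive one chaos.

```python
import numpy as np

def f(h, beta=5.0, theta=0.0):
    """Sigmoid transfer function of equation (1)."""
    return 1.0 / (1.0 + np.exp(-beta * (h - theta)))

def divergence(J, eps=1e-14, steps=60, seed=2):
    """Distance d(t) between two trajectories of equation (2) whose initial
    states differ by eps on a single neuron (cf. Fig. 4)."""
    rng = np.random.default_rng(seed)
    s1 = rng.random(J.shape[0])
    s2 = s1.copy()
    s2[0] += eps                 # perturb one initial activity by 1e-14
    d = np.empty(steps)
    for t in range(steps):
        s1, s2 = f(J @ s1), f(J @ s2)
        d[t] = np.linalg.norm(s1 - s2)
    return d

rng = np.random.default_rng(3)
J = rng.uniform(-1.0, 1.0, (3, 3))
d = divergence(J)

# Least-squares slope of log d(t), taken over the steps where d(t) > 0
# (the distance may underflow to zero once both trajectories coincide).
t_pos = np.flatnonzero(d > 0)
lam = np.polyfit(t_pos, np.log(d[t_pos]), 1)[0] if len(t_pos) >= 2 else float("-inf")
print(lam)
```

In a chaotic regime the fit should only use the initial growth phase, before d(t) saturates at the size of the attractor; the value 0.62 quoted in Fig. 4 was obtained in that way for a specific network.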
The possibility of chaotic dynamics in neural networks, because of the sensitive dependence on initial conditions, imposes a limit on any long-term prediction concerning the evolution of the system. The stability curves of Fig. 1 and their extrapolation to networks of biological size indicate that the occurrence of an unstable chaotic regime is very probable in networks where the synapses are not strongly constrained. In a chaotic regime, the macroscopic states of activity of the neurons are very unlikely to be the substrate for any cognitive or informative process, because these states will critically depend on uncontrollable microscopic fluctuations. Explicit mechanisms must be implemented by the network to adapt its parameters, through learning for instance, in order to avoid chaos, which otherwise would be dominant. A possible alternative would be for the neural network to be capable of attributing a cognitive significance to the shapes formed by the attractors as shown in Figs 5 and 6. Further investigation of the dynamic properties of neural networks needs to be developed to understand whether information processing through chaotic attractors can occur. In this work we have been concerned with the analysis of the dynamic properties of

neural networks, with continuous positive and bounded activities, and no restrictive conditions on synaptic weights. It appears clearly that these systems are endowed with very rich dynamic properties. A further problem which is naturally raised is that of the synthesis of a neural network with specified dynamic properties: for instance, how to choose or modify the synaptic weights in order to obtain, for a neuron activity, an oscillatory regime with given repetition rate and amplitude, or a chaotic regime with given statistical signal characteristics. To solve these questions requires the development of theoretical approaches to describe the behavior of networks of interacting nonlinear elements that are devoid of an energy function. Besides the progress this would bring to the understanding of neural networks and complex systems, interesting applications could result in system theory, control, and signal and information processing.

REFERENCES

1. J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natn. Acad. Sci. USA 79, 2554-2558 (1982).
2. D. J. Amit, Modeling Brain Function: The World of Attractor Neural Networks. Cambridge University Press, Cambridge (1989).
3. D. Stariolo, Dynamics of neural networks with continuous variables, Phys. Lett. A152, 349-352 (1991).
4. A. V. Holden, W. Winlow and P. G. Haydon, The induction of periodic and chaotic activity in a molluscan neurone, Biol. Cybern. 43, 169-173 (1982).
5. A. Babloyantz and J. M. Salazar, Evidence of chaotic dynamics of brain activity during the sleep cycle, Phys. Lett. A111, 152-156 (1985).
6. A. Babloyantz and A. Destexhe, Low-dimensional chaos in an instance of epilepsy, Proc. Natn. Acad. Sci. USA 83, 3513-3517 (1986).
7. K. Aihara, T. Takabe and M. Toyoda, Chaotic neural networks, Phys. Lett. A144, 333-340 (1990).
8. M. Bauer and W. Martienssen, Quasi-periodicity route to chaos in neural networks, Europhys. Lett. 10, 427-431 (1989).
9. H. L. J. van der Maas, P. F. M. J. Verschure and P. C. M. Molenaar, A note on chaotic behavior in simple neural networks, Neural Networks 3, 119-122 (1990).
10. M. Y. Choi and B. A. Huberman, Dynamic behavior of nonlinear networks, Phys. Rev. A28, 1204-1206 (1983).
11. R. M. May, Simple mathematical models with very complicated dynamics, Nature 261, 459-467 (1976).
12. F. Chapeau-Blondeau and G. Chauvet, Stable, oscillatory, and chaotic regimes in the dynamics of small neural networks with delay, Neural Networks 5, 735-743 (1992).
13. J. J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natn. Acad. Sci. USA 81, 3088-3092 (1984).
14. P. Bergé, Y. Pomeau and C. Vidal, Order Within Chaos: Towards a Deterministic Approach to Turbulence. Wiley, New York (1986).
15. U. Riedel, R. Kühn and J. L. van Hemmen, Temporal sequences and chaos in neural nets, Phys. Rev. A38, 1105-1108 (1988).
16. C. A. Skarda and W. J. Freeman, How brains make chaos in order to make sense of the world, Behavioral Brain Sci. 10, 161-195 (1987).