STAR CELLULAR NEURAL NETWORKS FOR ASSOCIATIVE AND DYNAMIC MEMORIES


International Journal of Bifurcation and Chaos, Vol. 14, No. 5 (2004) 725-772
© World Scientific Publishing Company

STAR CELLULAR NEURAL NETWORKS FOR ASSOCIATIVE AND DYNAMIC MEMORIES

MAKOTO ITOH
Department of Information and Communication Engineering, Fukuoka Institute of Technology, Fukuoka 811-0295, Japan

LEON O. CHUA
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA

Received October 5, 2003; Revised January 5, 2004

In this paper, we propose a Star cellular neural network (Star CNN) for associative and dynamic memories. A Star CNN consists of local oscillators and a central system. All oscillators are connected to the central system in the shape of a star, and communicate with each other through it. A Star CNN can store and retrieve given patterns in the form of synchronized chaotic states with appropriate phase relations between the oscillators (associative memories). Furthermore, the output pattern can occasionally travel around the stored patterns, their reverse patterns, and new relevant patterns which are called spurious patterns (dynamic memories).

Keywords: Star network; associative memory; dynamic memory; spurious pattern; CNN; Star CNN; dreams.

1. Introduction

There are four principal network topologies (the physical shape or layout of a network) in Local Area Networks, namely bus topology, mesh topology, star topology, and ring topology, as shown in Figs. 1(a)-1(d), respectively. The network topology plays a fundamental role in its functionality and performance [Bergeron, 2002]. These topologies can also be applied to neural networks:

bus topology: All cells are connected to a central backbone, called the bus. The bus permits all cells to receive every transmission. It is relatively simple to control traffic flow among cells. The main drawback stems from the fact that only one communication channel exists to service all cells on the network.
mesh topology: Each cell is connected to every other cell by a separate wire. This configuration provides redundant paths through the network, so if one cell blows up, we do not lose the network.

star topology: All cells are connected to a central cell, and all traffic emanates from that central cell. The advantage of the star topology is that if one cell fails, then only the failed cell is unable to send or receive data. Star networks are relatively easy to install, but have potential bottleneck and failure problems at the central cell, because all data must pass through it.

ring topology: All cells are connected to one another in the shape of a closed loop, so that each cell is connected directly to two other cells. In most cases, data flows in one direction only, with one cell receiving the signal and relaying it to the next cell on the ring. If a channel between two cells fails, then the entire network is lost.

Fig. 1. Network topology. (a) Bus topology, (b) mesh topology, (c) star topology, (d) ring topology.

Note that the cells are not necessarily identical. Furthermore, these topologies can also be mixed. For example, a tree topology combines characteristics of bus and star topologies. It consists of groups of star-configured cells connected to a bus. Tree topologies allow for the expansion of an existing network.

Recently, a new architecture for neural networks was introduced by [Hoppensteadt & Izhikevich, 1999]. They proposed a bus-topology-based neural network called a neurocomputer, which consists of oscillators having different frequencies that are connected weakly via a common medium (bus) forced by an external input (see Fig. 2). The neurocomputer can store and retrieve given patterns in the form of synchronized states with appropriate phase relations between the oscillators. The oscillatory neurocomputer needs only N connections between the oscillators and the common medium (the symbol N indicates the number of cells).

In this paper, we propose a new architecture called the Star Cellular Neural Network (Star CNN) for associative memories, which is based on the star topology and chaotic synchronization. A Star CNN is a new dynamic nonlinear system defined by connecting N identical dynamical systems called local cells with a central system in the shape of a star, as shown in Fig. 2. All local cells communicate with each other through the central system. Thus, a Star CNN has N connections from the N

Fig. 2. Neural networks. (a) Neurocomputer, (b) star cellular network.

local cells to the central system. The Star CNN has the same bottleneck as that of the star topology. However, the Star CNN can easily be implemented in hardware using only N connections, except that the central cell has to supply complicated signals. A Star CNN can store and retrieve complex oscillatory patterns in the form of synchronized chaotic states (associative memories). Furthermore, the Star CNN can function as a dynamic memory, namely its output pattern can occasionally travel around the stored patterns, their reverse patterns, and new relevant patterns (spurious patterns) [Christos, 2003; Adachi & Aihara, 1997]. It is motivated by the observation that a human being's associative memory

is not always static, but sometimes wanders from a certain memory to another memory, one after another. Furthermore, a flash of inspiration (a new pattern) sometimes appears which is relevant to known memories. The change of memory also sometimes seems to be chaotic. Our proposed Star CNN can function as both an associative (static) and a dynamic memory.

2. Associative Memories

An information storage device is called an associative memory if it permits the recall of information on the basis of partial knowledge of its content, but without knowing its storage location. The dynamics of associative memories is described as follows: Consider M binary (±1) patterns σ^1, σ^2, ..., σ^M, where each pattern σ^m contains N bits of information σ_i^m; namely

σ^1 = (σ_1^1, σ_2^1, ..., σ_N^1)^T,  σ^2 = (σ_1^2, σ_2^2, ..., σ_N^2)^T,  ...,  σ^M = (σ_1^M, σ_2^M, ..., σ_N^M)^T.   (1)

The dynamics of our associative memory is defined as follows:

(1) Assign the coupling coefficients:

s_ij = (1/N) Σ_{m=1}^{M} σ_i^m σ_j^m.   (2)

(2) Assign the initial state v_i(0).

(3) Update the state of the network:

v_i(n+1) = sgn( Σ_{j=1}^{N} s_ij v_j(n) ),   (3)

where

sgn(x) = { 1 if x ≥ 0, −1 if x < 0.

The above associative memory will converge to the desired pattern if M ≪ N [Müller & Reinhardt, 1990]. To make this paper somewhat self-contained, we will prove that the patterns σ^k (k = 1, 2, ..., M) are fixed points of Eq. (3) (for more details, see [Müller & Reinhardt, 1990]). Substituting σ_j^k into v_j(n) in Eq. (3), we obtain

v_i(n+1) = sgn( Σ_{j=1}^{N} s_ij σ_j^k )
         = sgn( (1/N) Σ_{j=1}^{N} Σ_{m=1}^{M} σ_i^m σ_j^m σ_j^k )
         = sgn( σ_i^k + (1/N) Σ_{j=1}^{N} Σ_{m≠k} σ_i^m σ_j^m σ_j^k ).   (4)

If the M patterns are uncorrelated, then the second term

(1/N) Σ_{j=1}^{N} Σ_{m≠k} σ_i^m σ_j^m σ_j^k   (5)

is estimated to be of size (M−1)/N [Müller & Reinhardt, 1990]. Therefore, it will most likely not affect the sign of σ_i^k as long as M ≪ N, i.e. the number of stored patterns is much smaller than the number of bits.
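As a concrete numerical illustration of the storage rule of Eq. (2) and the update rule of Eq. (3), consider the following sketch; the tiny 8-bit patterns and the helper names are our own choices, not taken from the paper:

```python
import numpy as np

def hebbian_weights(patterns):
    """s_ij = (1/N) * sum_m sigma_i^m * sigma_j^m, as in Eq. (2)."""
    P = np.asarray(patterns, dtype=float)   # shape (M, N)
    return P.T @ P / P.shape[1]

def sgn(x):
    """sgn(x) = +1 if x >= 0, -1 otherwise, as defined after Eq. (3)."""
    return np.where(x >= 0, 1, -1)

def recall(S, v0, steps=10):
    """Iterate v_i(n+1) = sgn(sum_j s_ij v_j(n)), Eq. (3)."""
    v = np.asarray(v0)
    for _ in range(steps):
        v = sgn(S @ v)
    return v

# Two orthogonal 8-bit patterns (M << N is only loosely satisfied here).
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
S = hebbian_weights([p1, p2])

noisy = p1.copy()
noisy[0] = -1                 # corrupt one bit
print(recall(S, noisy))       # recovers p1
```

With only one corrupted bit the crosstalk term of Eq. (5) cannot flip any sign, so the network converges to the stored pattern in a single update.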
Then, we have

v_i(n+1) = sgn( Σ_{j=1}^{N} s_ij σ_j^k ) = σ_i^k,   (6)

which implies that the patterns σ^k (k = 1, 2, ..., M) are fixed points of Eq. (3). If the stored patterns are orthogonal to each other, we obtain

(1/N) Σ_{i=1}^{N} σ_i^m σ_i^m' = δ_mm',   (7)

where δ_mm' = 1 if m = m' and δ_mm' = 0 if m ≠ m'. In this case, Eq. (5) vanishes, and so the patterns σ^k (k = 1, 2, ..., M) again become fixed points of Eq. (3). Furthermore, if the first p cells start out

in the wrong state, then we get the relation

v_i(n+1) = sgn( [1 − (2p/N)] σ_i^k + (1/N) Σ_{j=1}^{N} Σ_{m≠k} σ_i^m σ_j^m σ_j^k ).   (8)

Under the condition M, p ≪ N, we still have

v_i(n+1) = sgn( Σ_{j=1}^{N} s_ij σ_j^k ) = σ_i^k.   (9)

That is, the cell will converge to the desired pattern within a single update.

3. Star CNN for Associative Memories

In this section, we design our Star CNN for associative memories. Let us first define the dynamics of an nth-order cell by

dx_i^1/dt = f_i^1(x_i^1, x_i^2, ..., x_i^n),
dx_i^2/dt = f_i^2(x_i^1, x_i^2, ..., x_i^n),
...
dx_i^n/dt = f_i^n(x_i^1, x_i^2, ..., x_i^n),   (i = 1, 2, ..., N)   (10)

where the subscript i denotes the ith cell and f_i^1, f_i^2, ..., f_i^n are functions of (x_i^1, x_i^2, ..., x_i^n). The dynamics of an nth-order Star CNN is defined by

dx_i^1/dt = f_i^1(x_i^1, x_i^2, ..., x_i^n) + u_i,
dx_i^2/dt = f_i^2(x_i^1, x_i^2, ..., x_i^n),
...
dx_i^n/dt = f_i^n(x_i^1, x_i^2, ..., x_i^n),   (i = 1, 2, ..., N)   (11)

where u_i denotes the input of the ith cell. The central system receives the signal x_i^1 from the ith cell and supplies the signal u_i to each cell.
Since a fully-connected CNN has a mesh topology and self loops, it needs N(N + )/2 connections, whereas a Star CNN needs only N connections (see Fig. 3). Next, we show that the Star CNN (2) can function as associative memories, if we modify the output y i and the signal u i as follows y i = h(x i ) = sgn(x i ), u i = sgn s ij y j = sgn s ij sgn(x j ). (6) Substituting Eq. (6) into Eq. (2), we obtain = x i + sgn s ij sgn(x j ). (7) If the initial condition satisfies x j (0) = σ k j, (j =, 2,..., N) (8)

Fig. 3. (a) Fully-connected CNN with 25 cells (each local cell has a self loop). (b) Star CNN with 26 cells (25 local cells connected to a central system with a master cell).

then we have

dx_i/dt = −x_i + sgn( Σ_{j=1}^{N} s_ij sgn(σ_j^k) )
        = −x_i + sgn( Σ_{j=1}^{N} s_ij σ_j^k )
        = −x_i + sgn( σ_i^k + (1/N) Σ_{j=1}^{N} Σ_{m≠k} σ_i^m σ_j^m σ_j^k ).   (19)

If the patterns are uncorrelated, the second term

(1/N) Σ_{j=1}^{N} Σ_{m≠k} σ_i^m σ_j^m σ_j^k   (20)

is estimated to be of size (M−1)/N [Müller & Reinhardt, 1990]. Therefore, it will not affect the sign of σ_i^k as long as M ≪ N. Thus, σ_i^k becomes an equilibrium point, and y_i = sgn(σ_i^k) = σ_i^k at this point. Furthermore, if the first p cells start out in the wrong initial state, then we get

dx_i/dt = −x_i + sgn( [1 − (2p/N)] σ_i^k + (1/N) Σ_{j=1}^{N} Σ_{m≠k} σ_i^m σ_j^m σ_j^k ).   (21)

Under the condition M, p ≪ N, y_i(t) = sgn(x_i(t)) converges to σ_i^k as t increases. Thus, the Star CNN (17) functions as an associative memory.

Some computer simulations of the above associative memories are given in Figs. 4 and 5. The four stored patterns are shown in Figs. 4(a)-4(d). Observe that since (−σ_i^m)(−σ_j^m) = σ_i^m σ_j^m in Eq. (2), the coupling coefficient s_ij defined by Eq. (2) is the same for each pattern σ^m and its complement −σ^m. In other words, if σ is a fixed point of Eq. (17), then the complementary pattern −σ is also a fixed point, even though it is not included in Eq. (2) as a separate pattern. It follows that the four complementary patterns shown in Figs. 4(e)-4(h) are also stored as associative memories in Eq. (17).

Fig. 4. Stored patterns for Star CNN associative memories. (a)-(d) Stored patterns, (e)-(h) stored reverse patterns.
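The retrieval dynamics of Eq. (17) can be sketched with the forward Euler scheme the paper uses (Δt = 0.01); the single stored 8-bit pattern, the number of corrupted cells, and the step count below are our own illustrative choices:

```python
import numpy as np

def sgn(x):
    # sgn as defined after Eq. (3): +1 for x >= 0, -1 otherwise
    return np.where(x >= 0, 1, -1)

def star_cnn(S, x_init, dt=0.01, steps=500):
    """Integrate dx_i/dt = -x_i + sgn(sum_j s_ij sgn(x_j)), Eq. (17)."""
    x = np.asarray(x_init, dtype=float)
    for _ in range(steps):
        u = sgn(S @ sgn(x))      # signal from the central system, Eq. (16)
        x += dt * (-x + u)       # forward Euler step
    return sgn(x)                # cell outputs y_i = sgn(x_i)

# Store one pattern via Eq. (2), then retrieve it from a corrupted copy.
sigma = np.array([1, 1, -1, -1, 1, -1, 1, -1])
S = np.outer(sigma, sigma) / sigma.size
x_init = sigma.astype(float)
x_init[:2] *= -1                 # first p = 2 cells start in the wrong state
print(star_cnn(S, x_init))       # recovers sigma
```

Since p/N = 1/4, the bracketed factor 1 − 2p/N in Eq. (21) stays positive and the corrupted cells are pulled back to the stored pattern.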

Fig. 5. Retrieved patterns. The first, third and fifth rows indicate the initial patterns, and the second, fourth and sixth rows indicate the recognized patterns.

The outputs of the blue and red cells represent the binary values −1 and 1, respectively. The coupling coefficients s_ij for the stored patterns are given in Appendix A. Observe that the Star CNN (17) can indeed retrieve the patterns. Observe also that some additional patterns [(i)-(k)] which differ from the original set of stored patterns are retrieved as well in Fig. 5. The patterns (i) and (j) are called

Fig. 6. Two spurious patterns generated by the Star CNN (17). The spurious patterns (i) and (j) are obtained by applying a majority rule to the stored patterns {(a), (b), (g)} and {(e), (f), (c)}, respectively.

spurious patterns, which are derived from the stored patterns [Christos, 2003]. That is, the spurious patterns (i) and (j) are obtained by applying a majority rule to the stored patterns {(a), (b), (g)} and {(e), (f), (c)}, respectively (see Fig. 6). The number of spurious patterns grows as we store more patterns [Christos, 2003]. If an inappropriate initial pattern is given, then the Star CNN (17) cannot pinpoint a particular pattern, and gives an output pattern (k) which bears no relationship to the stored patterns.

In this paper, a forward Euler algorithm with a time step size Δt = 0.01 is applied in all the computer simulations. The output of each cell is given by

y_j(x) = sgn(x_j),   (22)

and the 25 Star CNN cells are arranged as follows:

 1  2  3  4  5
 6  7  8  9 10
11 12 13 14 15
16 17 18 19 20
21 22 23 24 25   (23)

The symbol i indicates the ith cell.

3.2. Second-order Star CNNs

The dynamics of a second-order cell (oscillator) is written as

dx_i/dt = f(x_i, y_i),
dy_i/dt = g(x_i, y_i),   (i = 1, 2, ..., N)   (24)

where f and g are functions of x_i and y_i. Suppose now that the following system

dx_i/dt = f(x_i, y_i) + d(x_0 − x_i),
dy_i/dt = g(x_i, y_i),   (i = 1, 2, ..., N)   (25)

will synchronize when the coupling coefficient d (> 0) is sufficiently large and the signal x_0 is obtained from the master cell

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0).   (26)

It is known that the system (25) will synchronize if the Lyapunov exponents of the following difference system are all negative:

d(Δx)/dt = [∂f/∂x(x_0, y_0) − d] Δx + [∂f/∂y(x_0, y_0)] Δy,
d(Δy)/dt = [∂g/∂x(x_0, y_0)] Δx + [∂g/∂y(x_0, y_0)] Δy,   (27)

where Δx = x_i − x_0 and Δy = y_i − y_0 (for more details, see [Madan, 1993]). Furthermore, we suppose that each cell has a stable limit cycle which is symmetric with respect to the origin; namely, if (φ_i(t), ψ_i(t)) is a limit cycle solution, then (−φ_i(t), −ψ_i(t)) is also a solution.
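The drive scheme of Eqs. (25)-(26) can be sketched numerically, here using the piecewise-linear Van der Pol cell of Eq. (48) as (f, g); the coupling strength d, the initial states, and the sign convention of the y-equation (the usual Van der Pol form) are our own assumptions:

```python
def h(x):
    # saturation nonlinearity of Eq. (13)
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def f(x, y, alpha=2.0):
    return alpha * (y - x + 2.0 * h(x))

def g(x, y, beta=2.0):
    return -beta * x          # sign assumed (usual Van der Pol convention)

# Master cell (26) and one driven local cell (25), by forward Euler.
d, dt = 10.0, 0.01
x0, y0 = 0.1, 0.1             # master state
xi, yi = -0.5, 0.3            # local-cell state
for _ in range(5000):         # integrate up to t = 50
    dx0, dy0 = f(x0, y0), g(x0, y0)
    dxi = f(xi, yi) + d * (x0 - xi)   # drive term d(x0 - x_i)
    dyi = g(xi, yi)
    x0, y0 = x0 + dt * dx0, y0 + dt * dy0
    xi, yi = xi + dt * dxi, yi + dt * dyi
print(abs(xi - x0), abs(yi - y0))     # differences shrink toward zero
```

For this cell the Jacobian of the difference system (27) has negative-real-part eigenvalues for d large, so the local cell locks onto the master trajectory.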
Thus, we can also use the signal −x_0(t) as a driving signal, and the following system will then synchronize in anti-phase:

dx_i/dt = f(x_i, y_i) + d(−x_0 − x_i),
dy_i/dt = g(x_i, y_i).   (i = 1, 2, ..., N)   (28)

The above symmetry condition is satisfied if f and g satisfy

f(x_i, y_i) = −f(−x_i, −y_i),
g(x_i, y_i) = −g(−x_i, −y_i).   (29)

The dynamics of a second-order Star CNN for associative memories is defined by

dx_i/dt = f(x_i, y_i) + u_i,
dy_i/dt = g(x_i, y_i),   (i = 1, 2, ..., N)   (30)

where u_i denotes the input of the ith cell. The central system receives the signal x_i from the ith cell and

supplies the following signal to each cell:

u_i = d(m_i x_0 − x_i),   (i = 1, 2, ..., N)   (31)

where the parameter d > 0 is assumed to be sufficiently large, and

m_i = sgn( Σ_{j=1}^{N} s_ij w_j ),   (32)

where

w_j = sgn(x_0 x_j).   (33)

The signal x_0 is obtained from the master cell (26), and the function w_j gives the phase relation between the master cell and the jth cell, that is,

w_j = { 1 if x_0 x_j ≥ 0, −1 if x_0 x_j < 0.   (34)

If we set w_j = sgn(y_0 y_j), then we can also determine the phase relation using y_0 and y_j.

If a second-order Star CNN for associative memories has N local cells and one master cell, it needs N + 1 connections, while the fully-connected CNN with N cells has N(N+1)/2 connections. Therefore, if the number of local cells is equal to 25, namely N = 25, the fully-connected CNN has 325 connections, whereas the Star CNN has only 26 connections. Observe that the Star CNN is much simpler compared to a fully-connected CNN, as shown in Fig. 3.

Next, we study the phase relation between the local and the master cells. In the period satisfying w_j = sgn(x_0(t) x_j(t)) = σ_j^1, we have

m_i = sgn( Σ_{j=1}^{N} s_ij w_j )
    = sgn( Σ_{j=1}^{N} s_ij σ_j^1 )
    = sgn( σ_i^1 + (1/N) Σ_{j=1}^{N} Σ_{m≠1} σ_i^m σ_j^m σ_j^1 ).   (35)

If the stored patterns are uncorrelated, then the term

(1/N) Σ_{j=1}^{N} Σ_{m≠1} σ_i^m σ_j^m σ_j^1   (36)

will be estimated to be of size (M−1)/N [Müller & Reinhardt, 1990]. If the number of stored patterns is much smaller than the total number of cells (M ≪ N), this term becomes sufficiently small compared with σ_i^1. Thus, we have

m_i = σ_i^1,   u_i = d(σ_i^1 x_0 − x_i).   (37)

If σ_i^1 = 1 (resp. σ_i^1 = −1), then the driving signal is written as

u_i = d(x_0 − x_i)   (resp. u_i = d(−x_0 − x_i)),   (38)

and therefore the ith cell tends to synchronize with the master cell in phase (resp. in anti-phase).
Note that the attractor corresponding to the stored pattern σ has ith and jth cell synchronized in phase when σ i = σ j, and synchronized in anti-phase when σ i = σ j. Since σ and σ define the same pattern of phase relation [Hoppenstea & Izhikevich, 999], the Star CNN with the output function sgn(x i (t)) oscillates between σ and σ periodically. Thus, the Star CNNs can store patterns in the form of synchronized states with appropriate phase relations between the oscillators.
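The central-system computation of Eqs. (31)-(34) amounts to a few lines of code; the stored pattern and the value d = 10 below are our own illustrative choices:

```python
import numpy as np

def central_drive(S, x0, x, d=10.0):
    """w_j = sgn(x0*x_j) (Eq. 34), m_i = sgn(sum_j s_ij w_j) (Eq. 32),
    and the drive u_i = d*(m_i*x0 - x_i) (Eq. 31)."""
    w = np.where(x0 * x >= 0, 1, -1)
    m = np.where(S @ w >= 0, 1, -1)
    return d * (m * x0 - x)

# One stored pattern; cells already synchronized with the correct phases
# receive zero drive, so the phase pattern is an invariant state.
sigma = np.array([1, -1, 1, -1])
S = np.outer(sigma, sigma) / sigma.size
x_master = 0.5
x_cells = x_master * sigma        # in phase where sigma = 1, anti-phase where -1
print(central_drive(S, x_master, x_cells))   # -> all zeros
```

If some cells start with the wrong phase, w deviates from σ^1, but as long as few cells are wrong, m_i still equals σ_i^1 and the drive pulls them back.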

Observe that if we substitute Eqs. (31)-(33) into Eq. (30), we would obtain the following fully-connected CNN:

dx_i/dt = f(x_i, y_i) + d( sgn( Σ_{j=1}^{N} s_ij sgn(x_0 x_j) ) x_0 − x_i ),
dy_i/dt = g(x_i, y_i).   (i = 1, 2, ..., N)   (39)

Here, the signal x_0 is obtained from the following master cell:

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0).   (40)

Let us next consider a simplified version of the second-order Star CNN (30). In this scheme, the system can be built with fewer circuit elements. The dynamics of the simplified Star CNN is governed by the first-order differential equation

dy_i/dt = g(u_i, y_i),   (i = 1, 2, ..., N)   (41)

where the input u_i is given by

u_i = m_i x_0,   (i = 1, 2, ..., N).   (42)

The signal x_0 is obtained from the master cell, and m_i is obtained from

m_i = sgn( Σ_{j=1}^{N} s_ij w_j ),   (43)

where

w_j = sgn(y_0 y_j).   (44)

The phase relation is obtained from y_i, and the driving signal x_0 is obtained from the master cell (26). The local cells will synchronize if the Lyapunov exponent of the following difference system is negative:

d(Δy)/dt = [∂g/∂y(x_0, y_0)] Δy,   (i = 1, 2, ..., N)   (45)

where Δy = y_i − y_0 (for more details, see [Pecora & Carroll, 1990]). In this system, all cells are first-order cells except the master cell, whose dynamics is given by the second-order system

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0).   (46)

Another interesting neural network for associative memories consists of oscillators having different frequencies, connected weakly via a common medium (bus) forced by an external input [Hoppensteadt & Izhikevich, 1999]. The dynamics of this neural network is defined not on the usual state space, but on the phase space. The relationship between our second-order Star CNN and this neurocomputer is described in Appendix B. We now present two examples of second-order Star CNN cells:

(1) Two-cell CNNs [Zou & Nossek, 1991]:

dx_i/dt = −x_i + p h(x_i) + r h(y_i),
dy_i/dt = −y_i + s h(x_i) + q h(y_i),   (47)

where p, q, r and s are constants and h(x_i) = (1/2)( |x_i + 1| − |x_i − 1| ), (for example, p = 1.1, q = 1.1, r = 2, s = −2).

(2) Piecewise-linear Van der Pol oscillator [Itoh & Chua, 2003]:

dx_i/dt = α(y_i − x_i + 2h(x_i)),
dy_i/dt = −βx_i,   (48)

where α and β are constants and h(x_i) = (1/2)( |x_i + 1| − |x_i − 1| ), (for example, α = 2, β = 2).

3.3. Third-order Star CNNs

The dynamics of a chaotic third-order Star CNN cell is defined by

dx_i/dt = f(x_i, y_i, z_i),
dy_i/dt = g(x_i, y_i, z_i),
dz_i/dt = h(x_i, y_i, z_i),   (49)

where f, g and h are functions of x_i, y_i and z_i. We assume that each cell has a chaotic attractor which is symmetric with respect to the origin, and that the following system

dx_i/dt = f(x_i, y_i, z_i) + d(x_0 − x_i),
dy_i/dt = g(x_i, y_i, z_i),
dz_i/dt = h(x_i, y_i, z_i),   (i = 1, 2, ..., N)   (50)

will synchronize when the coupling coefficient d is sufficiently large, where the driving signal x_0 is obtained from the master chaotic cell

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),
dz_0/dt = h(x_0, y_0, z_0).   (51)

The system (50) will synchronize if the Lyapunov exponents of the following difference system are all negative:

d(Δx)/dt = [∂f/∂x(x_0, y_0, z_0) − d] Δx + [∂f/∂y(x_0, y_0, z_0)] Δy + [∂f/∂z(x_0, y_0, z_0)] Δz,
d(Δy)/dt = [∂g/∂x(x_0, y_0, z_0)] Δx + [∂g/∂y(x_0, y_0, z_0)] Δy + [∂g/∂z(x_0, y_0, z_0)] Δz,
d(Δz)/dt = [∂h/∂x(x_0, y_0, z_0)] Δx + [∂h/∂y(x_0, y_0, z_0)] Δy + [∂h/∂z(x_0, y_0, z_0)] Δz,   (52)

where Δx = x_i − x_0, Δy = y_i − y_0 and Δz = z_i − z_0 (for more details, see [Madan, 1993]). The dynamics of a third-order Star CNN for associative memories is defined by

dx_i/dt = f(x_i, y_i, z_i) + u_i,
dy_i/dt = g(x_i, y_i, z_i),
dz_i/dt = h(x_i, y_i, z_i),   (i = 1, 2, ..., N)   (53)

where u_i denotes the input of the ith cell. The central system receives the output x_i from the ith cell and supplies the following signal to each cell:

u_i = d(m_i x_0 − x_i),   (i = 1, 2, ..., N)   (54)

where the parameter d is assumed to be positive and sufficiently large, and

m_i = sgn( Σ_{j=1}^{N} s_ij w_j ),   (55)

where

w_j = sgn(x_0 x_j).   (56)

The signal x_0 is obtained from the master cell (51), and w_j gives the phase relation between the master and the local cells. Observe that if we substitute Eqs. (54)-(56) into Eq. (53), we would obtain the following fully-connected CNN:

dx_i/dt = f(x_i, y_i, z_i) + d( sgn( Σ_{j=1}^{N} s_ij sgn(x_0 x_j) ) x_0 − x_i ),
dy_i/dt = g(x_i, y_i, z_i),
dz_i/dt = h(x_i, y_i, z_i).   (i = 1, 2, ..., N)   (57)

Here, the signal x_0 is obtained from the following master cell:

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),
dz_0/dt = h(x_0, y_0, z_0).   (58)

A simplified version of the third-order Star CNN (53) is described by the second-order differential equations:

dy_i/dt = g(u_i, y_i, z_i),
dz_i/dt = h(u_i, y_i, z_i),   (i = 1, 2, ..., N)   (59)

where the input signal u_i is given by

u_i = m_i x_0,   (i = 1, 2, ..., N).   (60)

The signal x_0 is obtained from the master cell, and m_i is obtained from

m_i = sgn( Σ_{j=1}^{N} s_ij w_j ),   (61)

where

w_j = sgn(y_0 y_j).   (62)

In this system, we assume that the following local cells

dy_i/dt = g(x_0, y_i, z_i),
dz_i/dt = h(x_0, y_i, z_i),   (i = 1, 2, ..., N)   (63)

driven by x_0 will synchronize. This is satisfied if the Lyapunov exponents of the following difference system are all negative:

d(Δy)/dt = [∂g/∂y(x_0, y_0, z_0)] Δy + [∂g/∂z(x_0, y_0, z_0)] Δz,
d(Δz)/dt = [∂h/∂y(x_0, y_0, z_0)] Δy + [∂h/∂z(x_0, y_0, z_0)] Δz,   (64)

where Δy = y_i − y_0 and Δz = z_i − z_0 (for more details, see [Pecora & Carroll, 1990]). Finally, we present two examples of chaotic oscillators which can be exploited for associative memories:

(1) Chua's oscillator [Madan, 1993]:

dx_i/dt = α(y_i − x_i − k(x_i)),
dy_i/dt = x_i − y_i + z_i,
dz_i/dt = −βy_i − γz_i,   (65)

where α, β and γ are constants and

k(x_i) = b x_i + (1/2)(a − b)( |x_i + 1.0| − |x_i − 1.0| ),   (66)

(for example, α = 10, β = 0.5, γ = 0.45, a = −1.22, b = −0.734).

(2) Canonical Chua's oscillator [Itoh & Chua, 2003]:

dx_i/dt = α(x_i + y_i),
dy_i/dt = z_i − x_i,
dz_i/dt = −β(y_i + k̃(z_i)),   (67)

where α and β are constants and

k̃(z_i) = b̃ z_i + (1/2)(ã − b̃)( |z_i + 1.0| − |z_i − 1.0| ),   (68)

(for example, α = 0.3, β = 2, ã = 2, b̃ = 4).
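The odd symmetry required for the in-phase/anti-phase mechanism (cf. Eq. (29)) is easy to check for Chua's oscillator of Eqs. (65)-(66). The parameter defaults below are illustrative stand-ins, since the example values printed in the scanned text are partly illegible; the symmetry check holds for any parameter choice:

```python
import numpy as np

def chua_rhs(v, alpha=10.0, beta=14.87, gamma=0.0, a=-1.27, b=-0.68):
    """Right-hand side of Chua's oscillator, Eqs. (65)-(66).
    Parameter defaults are illustrative, not the paper's exact values."""
    x, y, z = v
    k = b * x + 0.5 * (a - b) * (abs(x + 1.0) - abs(x - 1.0))
    return np.array([alpha * (y - x - k),
                     x - y + z,
                     -beta * y - gamma * z])

v = np.array([0.3, -0.2, 0.5])
# k(-x) = -k(x), so the whole vector field is odd: F(-v) = -F(v).
print(np.allclose(chua_rhs(-v), -chua_rhs(v)))   # True
```

Because the vector field is odd, any attractor symmetric with respect to the origin admits the sign-reversed trajectory as a solution, which is what allows −x_0 to serve as the anti-phase driving signal.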

See [Madan, 1993] and [Itoh et al., 1994] for the chaotic synchronization of these oscillators.

3.4. Computer simulations of associative memories

Some computer simulations of associative memories using Chua's oscillators are given in Fig. 7. The four complementary stored patterns are given in Fig. 4. Observe that the Star CNN (53) can retrieve the patterns as synchronized oscillating patterns. Figure 8 shows the waveforms of the local cells. Figure 9 shows the synchronization of the local and master cells. Figure 10 shows the relationship between the output waveforms and the sequence of patterns. Additional spurious patterns are also memorized, as shown in Fig. 11. The simplified Star CNN (59) can retrieve not only the stored patterns, but can also generate some spurious patterns, as shown in Fig. 11. We have also applied the Star CNNs with associative memories to other kinds of oscillators, namely canonical Chua's oscillators, two-cell CNNs, and piecewise-linear Van der Pol oscillators, and obtained similar results. Note that in these examples, we used the function v_j(x) = sgn(x_j) as the output of the cells.

Fig. 7. Retrieved patterns. The first and second (third) rows indicate the initial patterns and the recognized oscillatory patterns, respectively. Since the output function v_i(x) = sgn(x_i(t)) is used, the Star CNN oscillates between σ and −σ periodically. The Star CNN consists of 26 Chua's oscillators (25 local cells and 1 master cell).

Fig. 8. Waveforms of the 1st–10th and 16th–25th cells, which are synchronized with the master cell in phase (left). Waveforms of the 11th, 12th, 13th, 14th, and 15th cells, which are synchronized with the master cell in anti-phase (right). The symbols x, y, z and r denote x_j, y_j, z_j, and the cell output sgn(x_j), respectively.

Fig. 9. Synchronization of x_1 and x_11. The state variables x_1 and x_0 are synchronized in phase, and x_11 and x_0 are synchronized in anti-phase.

Fig. 10. Relationship between the output waveforms and the sequence of patterns. The change of patterns is chaotic. (a) Output of the 1st–10th and 16th–25th cells. (b) Output of the 11th–15th cells.

Fig. 11. A recognized pattern which converges to an unstored oscillatory pattern. The first and second (third) rows indicate the initial patterns and the recognized oscillatory patterns, respectively.

3.5. Pattern recognition from a partial pattern of images

The number of coupling coefficients s_ij increases rapidly as the given image size increases. For example, if the image is given by a 100 × 100 pattern (namely, 10 000 local cells), then we have to obtain a 10 000 × 10 000 matrix and store 10 000 × 10 000 = 100 000 000 = 10^8 coupling coefficients. In the case of a 1000 × 1000 pattern, we have to obtain a 1 000 000 × 1 000 000 matrix and store 1 000 000 × 1 000 000 = 10^12 coefficients. Thus, it is not practical to simulate associative memories for large patterns on a PC. However, under certain specialized situations, we can recall an entire image from a much smaller and corrupted subset, as illustrated by the computer simulations shown in Figs. 12 and 13. In this example, only a 3 × 3 pattern (1% of the image) marked by a green square is used for the recognition, and the following palettes are applied to the cell state x_i and the cell output v_i = sgn(x_i):

State x_i:  1 < x_i, red;  x_i = 1, black;  −1 < x_i < 1, gray scale;  x_i = −1, white;  x_i < −1, blue.
Output v_i:  v_i < 0, blue;  v_i ≥ 0, red.

Observe that the three kinds of snails can be identified from only a small partial subset. This example suggests the possibility of the following recognition strategy:

- Partition a given image into several appropriately selected smaller subsets.
- Choose several partial patterns.
- Recognize them independently by using Star CNNs.
- Apply a majority rule if each recognized image is different.

In this procedure, the number of coupling coefficients is relatively small. That is, if a given image is partitioned into K partial subsets, then the number of coupling coefficients reduces from N^2 to LN^2/K^2.
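The reduction from N^2 to LN^2/K^2 coefficients is simple bookkeeping: each of the L chosen partial patterns is an (N/K)-cell pattern with its own (N/K)^2 coupling matrix. A minimal sketch of the count (the worked numbers use N = 900 cells, K = 100 subsets, L = 1 chosen pattern, matching the snail example in the text):

```python
# Coupling-coefficient bookkeeping for the partitioned recognition scheme:
# a full Star CNN on an N-cell image needs N**2 coefficients s_ij, while
# recognizing L partial patterns over a K-way partition needs only
# L * (N/K)**2 = L * N**2 / K**2 of them.

def coefficient_counts(N, K, L):
    full = N ** 2
    partial = L * N ** 2 // K ** 2
    return full, partial

full, partial = coefficient_counts(N=900, K=100, L=1)
print(full, partial, 100.0 * partial / full)  # 810000 81 0.01
```

With K = 1 (no partitioning) the two counts coincide, so the savings come entirely from the partition.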
Here, the symbols N and L denote the total number of cells and the number of chosen partial patterns, respectively. In Figs. 12 and 13, we have N = 900, K = 100, and L = 1, and so N^2 = 810 000 and LN^2/K^2 = 81. Thus, the number of coupling coefficients reduces from 810 000 to only 81 (that is, 0.01%).

4. Dynamic Memories

The Star CNN for associative memories usually converges to a stored pattern, or to a spurious pattern, if an input is given. However, memory associations in human brains are not always static. It sometimes

wanders from a certain memory to another memory, one after another. Furthermore, a flash of inspiration (a new pattern) sometimes appears which is related to known memories, and hence is called a spurious memory. The generation of spurious memories in human brains can be interpreted as a form of creative activity [Christos, 2003].

Fig. 12. Relation between global images and partial patterns of images for associative memories. (a)–(c) Global images. (d)–(f) Partial patterns of images (marked by green squares), which made use of associative memories.
In this section, we realize the function of dynamic memories [Adachi & Aihara, 1997] by modifying the signal u_i from the central system. In this system, the output pattern of the Star CNN can travel around stored patterns and their reverse patterns, as well as new relevant patterns called spurious patterns [Christos, 2003].

4.1. Third-order Star CNNs with dynamic memories

The dynamics of a third-order Star CNN with dynamic memories is defined by

dynamics:
dx_i/dt = f(x_i, y_i, z_i) + u_i,
dy_i/dt = g(x_i, y_i, z_i),   (i = 1, 2, …, N)   (69)
dz_i/dt = h(x_i, y_i, z_i),

where u_i denotes the input of the ith cell. The central system receives the output x_i from each Star CNN cell and supplies the following signal to the ith cell:

Fig. 13. Retrieved patterns. The first and second (third) rows indicate the initial patterns and the recognized oscillatory output patterns, respectively. In this example, Chua's oscillators are used as local cells.
signal:   u_i = d(m_i x_0 − x_i),   (i = 1, 2, …, N)   (70)

where the parameter d is a positive constant and m_i is given by

m_i = Σ_{j=1}^{N} s_{ij} w_j,   (71)

where

w_j = sgn(x_0 x_j).   (72)

The signal x_0 is obtained from the master chaotic cell

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),   (73)
dz_0/dt = h(x_0, y_0, z_0).

Let us compare m_i in Eq. (55) and m_i in Eq. (71). Observe that whereas m_i in Eq. (55) always assumes binary values, namely,

m_i = sgn( Σ_{j=1}^{N} s_{ij} w_j ) = ±1,   (74)

there is no restriction on the amplitude of m_i in Eq. (71). Let us study next the dynamics of (71). For the time period of t satisfying w_j = sgn(x_0(t) x_j(t)) = σ_j^1 (j = 1, 2, …, N), we get the following relation

m_i = Σ_{j=1}^{N} s_{ij} w_j
    = Σ_{j=1}^{N} s_{ij} σ_j^1
    = Σ_{j=1}^{N} ( (1/N) Σ_{m=1}^{M} σ_i^m σ_j^m ) σ_j^1
    = σ_i^1 + (1/N) Σ_{j=1}^{N} Σ_{m=2}^{M} σ_i^m σ_j^m σ_j^1
    = σ_i^1 + ε,   (75)

where

ε = (1/N) Σ_{j=1}^{N} Σ_{m=2}^{M} σ_i^m σ_j^m σ_j^1.   (76)

In this case, the second term ε affects the value of m_i. If we assume that M ≪ N (i.e. the number of stored patterns is much smaller than the total number of Star CNN cells), then we have ε ≪ 1 and

m_i = σ_i^1 + ε ≈ σ_i^1,
u_i = d(m_i x_0 − x_i) = d(σ_i^1 x_0 − x_i) + d ε x_0(t) ≈ d(σ_i^1 x_0 − x_i).   (77)

Thus, the driving signal u_i may not synchronize the master and slave cells completely, since the term d ε x_0(t) disturbs the synchronization. The output pattern oscillates in the phase relation corresponding to the pattern σ^1 for a while, but does not converge to the pattern σ^1. Hence, the output pattern may occasionally travel around the stored patterns and their reverse patterns because chaos exhibits a recurrent property, or may travel around new (spurious) patterns which are related to the stored patterns. Furthermore, there are cases where the output pattern converges to an unexpected pattern or oscillates between some of the stored and unstored patterns. Therefore, it is necessary to carry out extensive numerical simulations of the Star CNN (69). Substituting Eqs. (70)–(72) into Eq.
(69), we obtain the following fully-connected CNN:

dx_i/dt = f(x_i, y_i, z_i) + d( Σ_{j=1}^{N} s_{ij} sgn(x_0 x_j) x_0 − x_i ),
dy_i/dt = g(x_i, y_i, z_i),   (i = 1, 2, …, N)   (78)
dz_i/dt = h(x_i, y_i, z_i).

Here, the signal x_0 is obtained from the following master cell:

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),   (79)
dz_0/dt = h(x_0, y_0, z_0).
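The crosstalk estimate of Eqs. (75) and (76) can be checked directly: with the Hebbian coefficients s_ij = (1/N) Σ_m σ_i^m σ_j^m and the phase relation locked to the first stored pattern, m_i equals σ_i^1 plus a small term ε. A sketch with randomly generated patterns (the sizes N = 400 and M = 3 are illustrative choices, not taken from the paper):

```python
import random

random.seed(0)                       # deterministic, illustrative run
N, M = 400, 3                        # cells and stored patterns, M << N
sigma = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(M)]

# Hebbian coefficients s_ij = (1/N) * sum_m sigma_i^m * sigma_j^m, with the
# phase relation w_j locked to the first stored pattern, as in Eq. (75).
w = sigma[0]
m = []
for i in range(N):
    s_row = [sum(sigma[p][i] * sigma[p][j] for p in range(M)) / N
             for j in range(N)]
    m.append(sum(s_row[j] * w[j] for j in range(N)))

# Eq. (75): m_i = sigma_i^1 + eps_i, with eps_i the crosstalk of Eq. (76)
eps = [mi - sigma[0][i] for i, mi in enumerate(m)]
print(max(abs(e) for e in eps))      # small compared to 1 when M << N
```

For random patterns the crosstalk scales like (M − 1)/√N, so sgn(m_i) still recovers σ_i^1, while the residual ε is exactly the term that perturbs the synchronization in Eq. (77).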

A simplified version of Eq. (69) may be defined as follows:

dynamics:
dy_i/dt = g(u_i, y_i, z_i),
dz_i/dt = h(u_i, y_i, z_i),   (i = 1, 2, …, N)   (80)

where u_i is given by

signal:   u_i = m_i x_0,   (i = 1, 2, …, N).   (81)

The signal x_0 is obtained from the master cell and m_i is obtained from

m_i = Σ_{j=1}^{N} s_{ij} w_j,   (82)

where

w_j = sgn(y_0 y_j).   (83)

In this case, the driving signal u_i is uniquely determined, since u_i does not have the parameter d. The dynamics of this system is studied in Appendix C.

4.2. Amplitude of driving signals

In this section, we modify the amplitude of m_i of Eq. (71). Suppose now that the central system supplies the following signal to the local cells:

u_i = d(m_i x_0 − x_i),   (i = 1, 2, …, N)   (84)

where

m_i = e Σ_{j=1}^{N} s_{ij} w_j,   (e > 1)   (85)

and

w_j = sgn(x_0 x_j) or sgn(y_0 y_j).   (86)

Note that the parameter e is added to m_i. Then, the attractor of the local cells driven by Eq. (84) is not the same attractor driven by Eq. (70). Furthermore, if we get the phase relation from the equation w_j = sgn(y_0 y_j), then the dynamics is not always identical to the one obtained from the equation w_j = sgn(x_0 x_j). However, these two modifications sometimes produce good results for the dynamic memories, as shown in the examples of Sec. 4.3 (see also Appendix D). Similarly, we can also apply the following driving signal to Eq. (80):

u_i = m_i x_0,   (i = 1, 2, …, N)   (87)

where

m_i = e Σ_{j=1}^{N} s_{ij} w_j,   (e > 1)   (88)

and

w_j = sgn(y_0 y_j).   (89)

4.3. Computer simulations of dynamic memories

Several computer simulations of dynamic memories via Chua's oscillators are given in Figs. 14–16. The stored patterns are given in Fig. 4. In this example, we set d = 4 and e = 1. Observe that many new related patterns (spurious patterns) appear between stored patterns and their reverse patterns. The sequence of these patterns is extremely sensitive to initial conditions, and the staying period of spurious patterns is shorter than that of stored patterns.
The spurious patterns are usually made up of combinations of stored patterns. Several spurious

Fig. 14. Sequence of output patterns of the Star CNN (69) for t ∈ [0, 44]. These patterns are sampled every second by using the output function sgn(x_i) and illustrated only when the pattern changes. The Star CNN consists of 26 Chua's oscillators.

Fig. 15. Sequence of output patterns for t ∈ [45, 77].

Fig. 16. Sequence of output patterns for t ∈ [78, 100].

patterns and their basic components are illustrated in Figs. 17–20. These results are related to the fact that we generally solve problems and generate new ideas by combining various notions, ideas, or memories in our mind, and sometimes have a sudden flash of inspiration. George Christos suggests that without spurious memories, the brain would not be capable of learning anything new, and furthermore it would become obsessed with its own strongest stored memories. Spurious memories help the brain avoid this problem and learn something new, albeit similar in some respect to what is already stored in the network [Christos, 2003].

Fig. 17. A set of basic patterns.

Let us investigate next the synchronization behavior of the local cells in Fig. 21. The local cells are not synchronized completely, since the output pattern occasionally travels around the stored patterns. Figure 22 shows the computer simulation of the simplified Star CNN (80). The output pattern converges to the stored oscillatory pattern, since g(u_i, y_i, z_i) and h(u_i, y_i, z_i) in Eq. (80) are linear functions (see Appendix C). Therefore, this system cannot exhibit dynamic memories.

The canonical Chua's oscillators can also be used for creating dynamic memories, as shown in Figs. 23 and 24. In this example, the functions u_i, m_i, and w_j are chosen as follows:

u_i = d(m_i x_0 − x_i),
m_i = e Σ_{j=1}^{N} s_{ij} w_j,   (90)
w_j = sgn(z_0 z_j) = { 1 if z_0 z_j ≥ 0;  −1 if z_0 z_j < 0 },

where d = 0.1 and e = 6.25. That is, the phase relation is obtained from z_i, but the driving signal is obtained from x_i. It is possible to get both the phase relation and the driving signal from x_i. A simplified Star CNN (80) based on the canonical Chua's oscillators can also exhibit dynamic memories, since h(u_i, y_i, z_i) = −β(y_i + k(z_i)) is a nonlinear (piecewise-linear) function (see Appendix C).

4.4.
Second-order Star CNNs with dynamic memories

In this subsection, we investigate the possibility of creating dynamic memories in second-order Star CNNs. Although autonomous second-order oscillators cannot be chaotic, coupled second-order oscillators may produce chaotic attractors. Thus, the following second-order Star CNN may exhibit dynamic memories:

dynamics:
dx_i/dt = f(x_i, y_i) + u_i,
dy_i/dt = g(x_i, y_i),   (i = 1, 2, …, N)   (91)

Fig. 18. (Left) Spurious patterns which are made up of combinations of (right) basic patterns.

Fig. 19. (Left) Spurious patterns which are made up of combinations of (right) basic patterns.

Fig. 20. (Left) Spurious patterns which are made up of combinations of (right) basic patterns.

Fig. 21. Synchronization and waveforms of x_9 and x_12. (a) Waveforms x_j(t) of the 9th cell (red) and 12th cell (blue) for d = 0.1. The two cells are desynchronized at t ≈ 490, and then synchronized again at t ≈ 670. (b) Waveforms x_j(t) of the 9th cell (red) and 12th cell (blue) for d = 4. (c) Synchronization of x_9 and x_12 for d = 0.1. (d) Synchronization of x_9 and x_12 for d = 4.

Fig. 21. (Continued)

where u_i denotes the input of the ith cell. The central system receives the output x_i from the Star CNN cells and supplies the following signal to each cell:

signal:   u_i = d(m_i x_0 − x_i),   (i = 1, 2, …, N)   (92)

where

m_i = Σ_{j=1}^{N} s_{ij} w_j,   (93)

and

w_j = sgn(x_0 x_j) or sgn(y_0 y_j).   (94)

The signal x_0 is obtained from the master cell

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0).   (95)

Observe that if we substitute Eqs. (92)–(94) into Eq. (91), we would obtain a fully-connected CNN:

dx_i/dt = f(x_i, y_i) + d( Σ_{j=1}^{N} s_{ij} sgn(x_0 x_j) x_0 − x_i ),
dy_i/dt = g(x_i, y_i).   (i = 1, 2, …, N)   (96)

Here, the signal x_0 is obtained from the following master cell:

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0).   (97)

A simplified version of Eq. (91) is defined as follows:

dynamics:   dy_i/dt = g(u_i, y_i),   (i = 1, 2, …, N)   (98)

where the input u_i is given by

signal:   u_i = m_i x_0,   (i = 1, 2, …, N).   (99)

Here, the signal x_0 is obtained from the master cell and m_i is obtained from

m_i = Σ_{j=1}^{N} s_{ij} w_j,   (100)

where

w_j = sgn(y_0 y_j).   (101)

However, the simplified Star CNN (98) cannot exhibit dynamic memories, because its Star CNN cells are one-dimensional systems.

Fig. 22. Output pattern converges to the stored oscillatory pattern. (a) Initial pattern. (b) Converged oscillatory pattern.

Similarly, the following signals (with the parameters d and e)

u_i = d(m_i x_0 − x_i),   (102)

where

m_i = e Σ_{j=1}^{N} s_{ij} w_j,   (e > 1)   and   w_j = sgn(x_0 x_j),   (103)

and

u_i = m_i x_0,   (104)

where

m_i = e Σ_{j=1}^{N} s_{ij} w_j,   (e > 1)   and   w_j = sgn(y_0 y_j),   (105)

can be applied to Eqs. (91) and (98), respectively.

Several computer simulations of dynamic memories via two-cell CNNs are illustrated in Figs. 25–28. The stored patterns are given in Fig. 4. In this example, we set d = 0.2 and e = 6.25. Observe that many new related patterns appear between stored patterns and their reverse patterns. In the case of the piecewise-linear Van der Pol oscillators, we could not find appropriate parameter values of d and e for creating dynamic memories.

5. Relationship between Associative and Dynamic Memories

In this section, we study the relationship between the dynamics of associative memories and dynamic memories. Consider the following system:

dynamics:
dx_i/dt = f(x_i, y_i, z_i) + u_i,
dy_i/dt = g(x_i, y_i, z_i),   (i = 1, 2, …, N)   (106)
dz_i/dt = h(x_i, y_i, z_i),

Fig. 23. Sequence of output patterns of the Star CNN (69) for t ∈ [0, 63]. These patterns are sampled every second by using the output function sgn(x_i) and illustrated only when the pattern changes. The Star CNN consists of 26 canonical Chua's oscillators.

Fig. 24. Sequence of output patterns for t ∈ [64, 100].

Fig. 25. Sequence of output patterns of the Star CNN (91) for t ∈ [0, 32]. These patterns are sampled every second by using the output function sgn(x_i) and illustrated only when the pattern changes. The Star CNN consists of 26 two-cell CNNs.

Fig. 26. Sequence of output patterns for t ∈ [33, 62].

Fig. 27. Sequence of output patterns for t ∈ [63, 92].

Fig. 28. Sequence of output patterns for t ∈ [93, 100].

where u_i denotes the input of the ith cell. The central system receives the output x_i from each local cell and supplies the following signal to each cell:

signal:   u_i = d(m_i x_0 − x_i),   (i = 1, 2, …, N)   (107)

where d > 0 and

m_i = k( e Σ_{j=1}^{N} s_{ij} w_j ),   (108)

where

w_j = sgn(x_0 x_j).   (109)

Here, the symbol k(·) denotes a scalar function, w_j specifies the phase relation between the master and the local cells, and x_0 is obtained from the master cell

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),   (110)
dz_0/dt = h(x_0, y_0, z_0).

The scalar function k(·) and the parameters d and e in Eqs. (107) and (108) are defined as follows:

Memory type          k(x)      d        e        m_i
Associative memory   sgn(x)    d ≫ 1    e = 1    m_i = ±1
Dynamic memory       x         d ≪ 1    e > 1    no restriction on m_i
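The table above can be sketched as a single routine in which the choice of k(·) and e switches between the two memory types; the 4-cell pattern and the values d = 1 and e = 6.25 below are illustrative (only e = 6.25 echoes a value used in the paper's examples):

```python
# Sketch of the unified central-system signal of Eqs. (107)-(109):
# u_i = d*(k(e * sum_j s_ij * w_j) * x0 - x_i), with w_j = sgn(x0 * x_j).
# k = sgn gives the associative case (m_i clamped to +/-1); k = identity
# gives the dynamic case (m_i unrestricted). All numbers are made up.

def sgn(v):
    return 1.0 if v >= 0 else -1.0

def central_signal(x, x0, S, k, d, e):
    w = [sgn(x0 * xj) for xj in x]          # phase relation, Eq. (109)
    u = []
    for i in range(len(x)):
        m_i = k(e * sum(S[i][j] * w[j] for j in range(len(x))))  # Eq. (108)
        u.append(d * (m_i * x0 - x[i]))     # Eq. (107)
    return u

sigma = (1, -1, 1, -1)                      # one stored pattern (hypothetical)
S = [[si * sj / 4.0 for sj in sigma] for si in sigma]  # s_ij = sigma_i*sigma_j/N
x, x0 = [0.5, -0.2, 0.3, -0.4], 1.0         # cell states and master output

u_assoc = central_signal(x, x0, S, k=sgn, d=1.0, e=1.0)          # m_i = +/-1
u_dyn = central_signal(x, x0, S, k=lambda v: v, d=1.0, e=6.25)   # m_i scaled
```

With the states already aligned to the stored pattern, the associative signal pulls each cell toward ±x_0, while the dynamic signal is amplified by e and never saturates to ±1.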

In the case of associative memories, m_i is restricted to the binary values ±1. In contrast, there is no restriction on m_i in the case of dynamic memories. Furthermore, the parameter ranges of d and e are completely different. It is known that the number of spurious patterns arising from static (associative) memories increases with the number of stored patterns [Christos, 2003]. In contrast, a dynamic memory can create many spurious patterns even if we store only a few patterns. In other words, dynamic memories are more creative than static memories.

6. Communication between Star CNN Cells

A chaotic communication system based on Chua's oscillators was proposed in [Itoh et al., 1994]. This system can be used to communicate between Star CNN cells. In this system, the central system supplies the following driving signal to the Star CNN cells:

driving signal:   u_i = x_0,   (i = 1, 2, …, N)   (111)

where the signal x_0 is obtained from the master cell

master cell:
dx_0/dt = f(x_0, y_0, z_0) + s(t),
dy_0/dt = g(x_0, y_0, z_0),   (112)
dz_0/dt = h(x_0, y_0, z_0).

Here, s(t) indicates the information signal. That is, the information signal is injected into the master cell as a forcing term. The Star CNN cells can recover the information signal s(t) using the following system:

local cells:
dy_i/dt = g(u_i, y_i, z_i),
dz_i/dt = h(u_i, y_i, z_i),   (i = 1, 2, …, N)   (113)
r(t) = du_i/dt − f(u_i, y_i, z_i).

An outline of the recovery process is as follows. Suppose that the local cells (113) synchronize. Then, we have the relation

lim_{t→∞} y_i(t) = y_0(t),   lim_{t→∞} z_i(t) = z_0(t),   (114)

and so

r(t) = du_i/dt − f(u_i, y_i, z_i) ≈ dx_0/dt − f(x_0, y_0, z_0) = s(t)   (for t → ∞).   (115)
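Under the stated assumptions, the recovery process of Eqs. (111)–(115) can be simulated end to end. The sketch below injects a small sinusoid s(t) into a master Chua cell, drives one local cell with u_i = x_0, and recovers r(t) = du_i/dt − f(u_i, y_i, z_i) using a central-difference estimate of du_i/dt. The Chua parameters are the classic dimensionless double-scroll set, chosen for illustration rather than taken from the paper.

```python
import math

# End-to-end sketch of the chaotic communication scheme of Eqs. (111)-(115).
ALPHA, BETA, GAMMA = 9.0, 100.0 / 7.0, 0.0
M0, M1 = -8.0 / 7.0, -5.0 / 7.0

def k(x):
    return M1 * x + 0.5 * (M0 - M1) * (abs(x + 1.0) - abs(x - 1.0))

def f(x, y, z):                      # first component of Chua's oscillator
    return ALPHA * (y - x - k(x))

def s(t):                            # information signal (illustrative)
    return 0.1 * math.sin(t)

def deriv(state, t):
    x, y, z, yr, zr = state
    return (f(x, y, z) + s(t),       # master cell with injected s(t), Eq. (112)
            x - y + z,
            -BETA * y - GAMMA * z,
            x - yr + zr,             # local cell driven by u_i = x0, Eq. (113)
            -BETA * yr - GAMMA * zr)

def rk4(state, t, dt):
    k1 = deriv(state, t)
    k2 = deriv(tuple(v + 0.5 * dt * d for v, d in zip(state, k1)), t + 0.5 * dt)
    k3 = deriv(tuple(v + 0.5 * dt * d for v, d in zip(state, k2)), t + 0.5 * dt)
    k4 = deriv(tuple(v + dt * d for v, d in zip(state, k3)), t + dt)
    return tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(state, k1, k2, k3, k4))

dt, steps = 0.001, 40000
state, trace = (0.1, 0.0, -0.1, 1.0, 1.0), []
for n in range(steps):
    trace.append(state)              # trace[n] is the state at t = n*dt
    state = rk4(state, n * dt, dt)

# Recover r(t) over the tail, after the local cell has synchronized, Eq. (115)
errs = []
for n in range(30000, steps - 1):
    x_prev, x_next = trace[n - 1][0], trace[n + 1][0]
    x, _, _, yr, zr = trace[n]
    r = (x_next - x_prev) / (2.0 * dt) - f(x, yr, zr)
    errs.append(abs(r - s(n * dt)))
print(sum(errs) / len(errs))  # mean recovery error, far below the 0.1 amplitude
```

The recovery error is dominated by the finite-difference estimate of du_i/dt; the synchronization error itself decays exponentially and is negligible by the sampled window.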