STAR CELLULAR NEURAL NETWORKS FOR ASSOCIATIVE AND DYNAMIC MEMORIES


International Journal of Bifurcation and Chaos, Vol. 14, No. 5 (2004). © World Scientific Publishing Company

STAR CELLULAR NEURAL NETWORKS FOR ASSOCIATIVE AND DYNAMIC MEMORIES

MAKOTO ITOH, Department of Information and Communication Engineering, Fukuoka Institute of Technology, Fukuoka, Japan

LEON O. CHUA, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA

Received October 5, 2003; Revised January 5, 2004

In this paper, we propose a Star cellular neural network (Star CNN) for associative and dynamic memories. A Star CNN consists of local oscillators and a central system. All oscillators are connected to the central system in the shape of a star, and communicate with each other through it. A Star CNN can store and retrieve given patterns in the form of synchronized chaotic states with appropriate phase relations between the oscillators (associative memories). Furthermore, the output pattern can occasionally travel around the stored patterns, their reverse patterns, and new relevant patterns called spurious patterns (dynamic memories).

Keywords: Star network; associative memory; dynamic memory; spurious pattern; CNN; Star CNN; dreams.

1. Introduction

There are four principal network topologies (the physical shape or layout of a network) in Local Area Networks, namely, bus topology, mesh topology, star topology, and ring topology, as shown in Figs. 1(a)-1(d), respectively. The network topology plays a fundamental role in a network's functionality and performance [Bergeron, 2002]. These topologies can be applied to neural networks:

bus topology: All cells are connected to a central backbone, called the bus. The bus permits all cells to receive every transmission. It is relatively simple to control traffic flow among cells. The main drawback stems from the fact that only one communication channel exists to service all cells on the network.
mesh topology: Each cell is connected to every other cell by a separate wire. This configuration provides redundant paths through the network, so if one cell fails, we do not lose the network.

star topology: All cells are connected to a central cell, and all traffic emanates from the central cell. The advantage of the star topology is that if one cell fails, then only the failed cell is unable to send or receive data. Star networks are relatively easy to install, but have potential bottleneck and failure problems at the central cell, because all data must pass through it.

ring topology: All cells are connected to one another in the shape of a closed loop, so that each cell is connected directly to two other cells. In most cases, data flows in one direction only, with one cell receiving the signal and relaying it to the next cell on the ring. If a channel between two cells fails, then the entire network is lost.

Fig. 1. Network topology. (a) Bus topology, (b) mesh topology, (c) star topology, (d) ring topology.

Note that the cells are not necessarily identical. Furthermore, these topologies can also be mixed. For example, a tree topology combines characteristics of bus and star topologies: it consists of groups of star-configured cells connected to a bus. Tree topologies allow for the expansion of an existing network.

Recently, a new architecture for neural networks was introduced by [Hoppensteadt & Izhikevich, 1999]. They proposed a bus-topology-based neural network called a neurocomputer, which consists of oscillators having different frequencies that are connected weakly via a common medium (bus) forced by an external input (see Fig. 2). The neurocomputer can store and retrieve given patterns in the form of synchronized states with appropriate phase relations between the oscillators. The oscillatory neurocomputer needs only N connections between the oscillators and the common medium (the symbol N indicates the number of cells).

In this paper, we propose a new architecture called the Star Cellular Neural Network (Star CNN) for associative memories, which is based on the star topology and chaotic synchronization. A Star CNN is a dynamic nonlinear system defined by connecting N identical dynamical systems, called local cells, to a central system in the shape of a star, as shown in Fig. 2. All local cells communicate with each other through the central system. Thus, a Star CNN has N connections from the N

Fig. 2. Neural networks. (a) Neurocomputer, (b) star cellular neural network.

local cells to a central system. The Star CNN has the same bottleneck as the star topology. However, the Star CNN can easily be implemented in hardware using only N connections, at the cost of a central system that has to supply relatively complicated signals. A Star CNN can store and retrieve complex oscillatory patterns in the form of synchronized chaotic states (associative memories). Furthermore, the Star CNN can function as a dynamic memory; namely, its output pattern can occasionally travel around the stored patterns, their reverse patterns, and new relevant patterns (spurious patterns) [Christos, 2003; Adachi & Aihara, 1997]. This is motivated by the observation that a human being's associative memory

is not always static, but sometimes wanders from one memory to another, one after another. Furthermore, a flash of inspiration (a new pattern) sometimes appears which is relevant to known memories. The change of memory also sometimes seems to be chaotic. Our proposed Star CNN can function as both an associative (static) and a dynamic memory.

2. Associative Memories

An information storage device is called an associative memory if it permits the recall of information on the basis of partial knowledge of its content, but without knowing its storage location. The dynamics of associative memories is described as follows. Consider M binary (±1) patterns σ^1, σ^2, ..., σ^M, where each pattern σ^m contains N bits of information σ_i^m; namely

σ^1 = (σ_1^1, σ_2^1, ..., σ_N^1)^T, σ^2 = (σ_1^2, σ_2^2, ..., σ_N^2)^T, ..., σ^M = (σ_1^M, σ_2^M, ..., σ_N^M)^T. (1)

The dynamics of our associative memory is defined as follows:

(1) Assign the coupling coefficients:

s_ij = (1/N) Σ_{m=1}^{M} σ_i^m σ_j^m. (2)

(2) Assign the initial state v_i(0).

(3) Update the state of the network:

v_i(n+1) = sgn( Σ_{j=1}^{N} s_ij v_j(n) ), (3)

where

sgn(x) = 1 if x ≥ 0, and −1 if x < 0.

The above associative memory will converge to the desired pattern if M ≪ N [Müller & Reinhardt, 1990]. To make this paper somewhat self-contained, we will prove that the patterns σ^k (k = 1, 2, ..., M) are fixed points of Eq. (3) (for more details, see [Müller & Reinhardt, 1990]). Substituting σ_j^k into v_j(n) in Eq. (3), we obtain

v_i(n+1) = sgn( Σ_j s_ij σ_j^k )
         = sgn( (1/N) Σ_j Σ_{m=1}^{M} σ_i^m σ_j^m σ_j^k )
         = sgn( (1/N) Σ_j σ_i^k σ_j^k σ_j^k + (1/N) Σ_j Σ_{m=1, m≠k}^{M} σ_i^m σ_j^m σ_j^k )
         = sgn( σ_i^k + (1/N) Σ_j Σ_{m=1, m≠k}^{M} σ_i^m σ_j^m σ_j^k ). (4)

If the M patterns are uncorrelated, then the second term

(1/N) Σ_j Σ_{m=1, m≠k}^{M} σ_i^m σ_j^m σ_j^k (5)

is estimated to be of size √((M−1)/N) [Müller & Reinhardt, 1990]. Therefore, it will most likely not affect the sign of σ_i^k as long as M ≪ N, i.e. the number of stored patterns is much smaller than the number of bits.
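Before continuing, the storage rule (2) and the threshold update (3) can be made concrete in a minimal numerical sketch. The helper names (`store`, `recall`) and the orthogonal test patterns are our own choices, not part of the paper:

```python
import numpy as np

def sgn(x):
    # sgn(x) = +1 for x >= 0, -1 for x < 0, as defined after Eq. (3)
    return np.where(np.asarray(x) >= 0.0, 1.0, -1.0)

def store(patterns):
    # Coupling coefficients of Eq. (2); `patterns` has shape (M, N)
    P = np.asarray(patterns, dtype=float)
    return P.T @ P / P.shape[1]

def recall(S, v, max_steps=20):
    # Iterate the update of Eq. (3) until the state stops changing
    v = np.asarray(v, dtype=float)
    for _ in range(max_steps):
        v_next = sgn(S @ v)
        if np.array_equal(v_next, v):
            break
        v = v_next
    return v

# Two orthogonal 64-bit patterns, so the crosstalk term (5) vanishes exactly
sigma1 = np.ones(64)
sigma2 = np.concatenate([np.ones(32), -np.ones(32)])
S = store([sigma1, sigma2])

probe = sigma1.copy()
probe[:5] = -1.0                                  # start 5 cells in the wrong state
print(np.array_equal(recall(S, probe), sigma1))   # True: the pattern is recovered
```

With orthogonal patterns the recall succeeds in a single update, matching the argument around Eq. (9).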
Then, we have

v_i(n+1) = sgn( Σ_j s_ij σ_j^k ) = σ_i^k, (6)

which implies that the σ^k (k = 1, 2, ..., M) are fixed points of Eq. (3). If the stored patterns are orthogonal to each other, we obtain

(1/N) Σ_{i=1}^{N} σ_i^m σ_i^{m'} = δ_{mm'}, (7)

where δ_{mm'} = 1 if m = m' and δ_{mm'} = 0 if m ≠ m'. In this case, the term (5) vanishes exactly, and so the patterns σ^k again become fixed points of Eq. (3). Furthermore, if the first p cells start out

in the wrong state, then we get the relation

v_i(n+1) = sgn( (1 − 2p/N) σ_i^k + (1/N) Σ_j Σ_{m=1, m≠k}^{M} σ_i^m σ_j^m σ_j^k ). (8)

Under the condition M, p ≪ N, we still have

v_i(n+1) = sgn( Σ_j s_ij σ_j^k ) = σ_i^k. (9)

That is, the network will converge to the desired pattern within a single update.

3. Star CNN for Associative Memories

In this section, we design our Star CNN for associative memories. Let us first define the dynamics of an nth-order cell by

dx_i^1/dt = f_i^1(x_i^1, x_i^2, ..., x_i^n),
dx_i^2/dt = f_i^2(x_i^1, x_i^2, ..., x_i^n),
...
dx_i^n/dt = f_i^n(x_i^1, x_i^2, ..., x_i^n), (i = 1, 2, ..., N) (10)

where the subscript i denotes the ith cell and f_i^1, f_i^2, ..., f_i^n are functions of (x_i^1, x_i^2, ..., x_i^n). The dynamics of an nth-order Star CNN is defined by

dx_i^1/dt = f_i^1(x_i^1, x_i^2, ..., x_i^n) + u_i,
dx_i^2/dt = f_i^2(x_i^1, x_i^2, ..., x_i^n),
...
dx_i^n/dt = f_i^n(x_i^1, x_i^2, ..., x_i^n), (i = 1, 2, ..., N) (11)

where u_i denotes the input of the ith cell. The central system receives the signal x_i^1 from the ith cell and supplies the signal u_i to each cell.

3.1. First-order Star CNNs

The dynamics of a first-order Star CNN is governed by a system of N differential equations

dx_i/dt = −x_i + u_i, (i = 1, 2, ..., N) (12)

where x_i and u_i denote the state and the input of the ith cell, respectively. The output y_i and the state x_i of each cell are related through the standard piecewise-linear saturation function [Chua & Roska, 2002]:

y_i = h(x_i) = (1/2)(|x_i + 1| − |x_i − 1|). (13)

The central system receives these outputs y_i = h(x_i) and supplies the following signal to each cell:

u_i = Σ_j a_ij y_j = Σ_j a_ij h(x_j). (14)

Substituting u_i into Eq. (12), we obtain the dynamics of a fully-connected CNN without input and threshold parameters [Chua, 1998]:

dx_i/dt = −x_i + Σ_j a_ij y_j = −x_i + Σ_j a_ij h(x_j). (15)

Thus, the dynamics of the systems (12) and (15) are equivalent.
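This equivalence can be checked numerically: the centrally computed input of Eq. (14), fed into Eq. (12), reproduces the fully-connected vector field of Eq. (15) exactly. A small sketch (variable names are ours):

```python
import numpy as np

def h(x):
    # Standard piecewise-linear saturation function, Eq. (13)
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

rng = np.random.default_rng(1)
N = 8
A = rng.standard_normal((N, N))   # arbitrary coupling coefficients a_ij
x = rng.standard_normal(N)        # arbitrary cell states

u = A @ h(x)                      # what the central system broadcasts, Eq. (14)
star_rhs = -x + u                 # Star CNN right-hand side, Eq. (12)
full_rhs = -x + A @ h(x)          # fully-connected CNN right-hand side, Eq. (15)
print(np.allclose(star_rhs, full_rhs))   # True
```

The difference is purely architectural: the star form computes the sums in one central place instead of along N(N+1)/2 wires.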
Since a fully-connected CNN has a mesh topology and self-loops, it needs N(N+1)/2 connections, whereas a Star CNN needs only N connections (see Fig. 3). Next, we show that the Star CNN (12) can function as an associative memory if we modify the output y_i and the signal u_i as follows:

y_i = h(x_i) = sgn(x_i), u_i = sgn( Σ_j s_ij y_j ) = sgn( Σ_j s_ij sgn(x_j) ). (16)

Substituting Eq. (16) into Eq. (12), we obtain

dx_i/dt = −x_i + sgn( Σ_j s_ij sgn(x_j) ). (17)

If the initial condition satisfies

x_j(0) = σ_j^k, (j = 1, 2, ..., N) (18)

Fig. 3. (a) Fully-connected CNN with 25 cells (local cells with self-loops). (b) Star CNN with 26 cells (local cells, a master cell, and a central system).

then we have

dx_i/dt = −x_i + sgn( Σ_j s_ij sgn(σ_j^k) )
        = −x_i + sgn( Σ_j s_ij σ_j^k )
        = −x_i + sgn( (1/N) Σ_j Σ_{m=1}^{M} σ_i^m σ_j^m σ_j^k )
        = −x_i + sgn( (1/N) Σ_j σ_i^k σ_j^k σ_j^k + (1/N) Σ_j Σ_{m=1, m≠k}^{M} σ_i^m σ_j^m σ_j^k )
        = −x_i + sgn( σ_i^k + (1/N) Σ_j Σ_{m=1, m≠k}^{M} σ_i^m σ_j^m σ_j^k ). (19)

If the patterns are uncorrelated, the second term

(1/N) Σ_j Σ_{m=1, m≠k}^{M} σ_i^m σ_j^m σ_j^k (20)

is estimated to be of size √((M−1)/N) [Müller & Reinhardt, 1990]. Therefore, it will not affect the sign of σ_i^k as long as M ≪ N. Thus, x_i = σ_i^k becomes an equilibrium point, and y_i = sgn(σ_i^k) = σ_i^k at this point. Furthermore, if the first p cells start out in the wrong initial state, then we get

dx_i/dt = −x_i + sgn( (1 − 2p/N) σ_i^k + (1/N) Σ_j Σ_{m=1, m≠k}^{M} σ_i^m σ_j^m σ_j^k ). (21)

Under the condition M, p ≪ N, y_j(t) = sgn(x_j(t)) converges to σ_j^k as t increases. Thus, the Star CNN (17) functions as an associative memory.

Some computer simulations of the above associative memory are given in Figs. 4 and 5. The four stored patterns are shown in Figs. 4(a)-4(d). Observe that since (−σ_i^m)(−σ_j^m) = σ_i^m σ_j^m in Eq. (2), the coupling coefficient s_ij defined by Eq. (2) is the same for each pattern σ^m and its complement −σ^m. In other words, if σ^m is a fixed point of Eq. (17), then the complementary pattern −σ^m is also a fixed point, even though it is not included in Eq. (2) as a separate pattern. It follows that the four complementary patterns shown in Figs. 4(e)-4(h) are also stored as associative memories in Eq. (17).

Fig. 4. Stored patterns for Star CNN associative memories. (a)-(d) Stored patterns, (e)-(h) stored reverse patterns.
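The retrieval dynamics of Eq. (17) can be sketched with the same forward-Euler scheme used in the paper's simulations (step size 0.01). The orthogonal 64-bit patterns below are our own stand-ins for the stored patterns of Fig. 4:

```python
import numpy as np

def sgn(x):
    return np.where(np.asarray(x) >= 0.0, 1.0, -1.0)

# Store two orthogonal 64-bit patterns via Eq. (2)
sigma1 = np.ones(64)
sigma2 = np.concatenate([np.ones(32), -np.ones(32)])
P = np.stack([sigma1, sigma2])
S = P.T @ P / P.shape[1]

# Corrupt 5 cells of sigma^1 and integrate dx/dt = -x + sgn(S sgn(x)), Eq. (17)
x = sigma1.copy()
x[:5] = -1.0
dt = 0.01
for _ in range(300):               # integrate to t = 3
    u = sgn(S @ sgn(x))            # signal from the central system, Eq. (16)
    x = x + dt * (-x + u)

print(np.array_equal(sgn(x), sigma1))   # True: the output recovers sigma^1
```

The corrupted cells relax exponentially toward the correct sign, so the output sgn(x) settles on the stored pattern well within a few time units.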

Fig. 5. Retrieved patterns. The first, third and fifth rows indicate the initial patterns, and the second, fourth and sixth rows indicate the recognized patterns.

The outputs of the blue and red cells represent the binary values −1 and 1, respectively. The coupling coefficients s_ij for the stored patterns are given in Appendix A. Observe that the Star CNN (17) can indeed retrieve the stored patterns. Observe also that some additional patterns [(i)-(k)], which differ from the original set of stored patterns, are retrieved as well in Fig. 5. The patterns (i) and (j) are called

Fig. 6. Two spurious patterns generated by the Star CNN (17). The spurious patterns (i) and (j) are obtained by applying a majority rule to the stored patterns {(a), (b), (g)} and {(e), (f), (c)}, respectively.

spurious patterns; they are derived from the stored patterns [Christos, 2003]. That is, the spurious patterns (i) and (j) are obtained by applying a majority rule to the stored patterns {(a), (b), (g)} and {(e), (f), (c)}, respectively (see Fig. 6). The number of spurious patterns grows as we store more patterns [Christos, 2003]. If an inappropriate initial pattern is given, then the Star CNN (17) cannot pinpoint a particular pattern, and gives an output pattern (k) which bears no relationship to the stored patterns.

In this paper, a forward Euler algorithm with a time step size Δt = 0.01 is applied in all the computer simulations. The output of each cell is given by

y_j = sgn(x_j), (22)

and the 25 Star CNN cells are arranged in a 5 × 5 grid, numbered row by row from 1 to 25, where the number i indicates the ith cell. (23)

3.2. Second-order Star CNNs

The dynamics of a second-order cell (oscillator) is written as

dx_i/dt = f(x_i, y_i),
dy_i/dt = g(x_i, y_i), (i = 1, 2, ..., N) (24)

where f and g are functions of x_i and y_i. Suppose now that the system

dx_i/dt = f(x_i, y_i) + d(x_0 − x_i),
dy_i/dt = g(x_i, y_i), (i = 1, 2, ..., N) (25)

will synchronize when the coupling coefficient d (> 0) is sufficiently large and the signal x_0 is obtained from the master cell

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0). (26)

It is known that the system (25) will synchronize if the Lyapunov exponents of the following difference system are all negative:

d/dt [Δx]   [∂f(x_0, y_0)/∂x_0 − d   ∂f(x_0, y_0)/∂y_0] [Δx]
     [Δy] = [∂g(x_0, y_0)/∂x_0       ∂g(x_0, y_0)/∂y_0] [Δy], (27)

where Δx = x_i − x_0 and Δy = y_i − y_0 (for more details, see [Madan, 1993]). Furthermore, we suppose that each cell has a stable limit cycle which is symmetric with respect to the origin; namely, if (φ_i(t), ψ_i(t)) is a limit cycle solution, then (−φ_i(t), −ψ_i(t)) is also a solution.
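As an illustration of the master-slave coupling d(x_0 − x_i) in Eq. (25), the sketch below drives one piecewise-linear Van der Pol cell of the form given later in Eq. (48) from a master copy. The coupling strength d = 10, the step size, and the initial states are our own choices for the demo, not values from the paper:

```python
def h(x):
    # Piecewise-linear saturation, Eq. (13)
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def cell(x, y, alpha=2.0, beta=2.0):
    # Piecewise-linear Van der Pol oscillator, Eq. (48):
    # dx/dt = alpha*(y - x + 2h(x)),  dy/dt = -beta*x
    return alpha * (y - x + 2.0 * h(x)), -beta * x

d, dt = 10.0, 0.001
x0, y0 = 0.1, 0.2          # master cell, Eq. (26)
x1, y1 = -1.5, 0.8         # slave cell, started from a different state

for _ in range(50_000):    # forward Euler to t = 50
    fx0, fy0 = cell(x0, y0)
    fx1, fy1 = cell(x1, y1)
    nx1 = x1 + dt * (fx1 + d * (x0 - x1))   # slave driven by the master's x
    ny1 = y1 + dt * fy1
    x0, y0 = x0 + dt * fx0, y0 + dt * fy0
    x1, y1 = nx1, ny1

print(abs(x1 - x0) < 1e-2)   # True: the slave has locked onto the master
```

For this cell the difference system (27) is stable in both linear regions of h once d is large enough, which is why the synchronization error decays to numerical noise.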
Thus, we can also use the signal −x_0(t) as a driving signal, and the following system will then synchronize in anti-phase:

dx_i/dt = f(x_i, y_i) + d(−x_0 − x_i),
dy_i/dt = g(x_i, y_i). (i = 1, 2, ..., N) (28)

The above symmetry condition is satisfied if f and g satisfy

f(x_i, y_i) = −f(−x_i, −y_i),
g(x_i, y_i) = −g(−x_i, −y_i). (29)

The dynamics of a second-order Star CNN for associative memories is defined by

dx_i/dt = f(x_i, y_i) + u_i,
dy_i/dt = g(x_i, y_i), (i = 1, 2, ..., N) (30)

where u_i denotes the input of the ith cell. The central system receives the signal x_i from the ith cell and

supplies the following signal to each cell:

u_i = d(m_i x_0 − x_i), (i = 1, 2, ..., N) (31)

where the parameter d > 0 is assumed to be sufficiently large, and

m_i = sgn( Σ_j s_ij w_j ), (32)

where

w_j = sgn(x_0 x_j). (33)

The signal x_0 is obtained from the master cell (26), and the function w_j gives the phase relation between the master cell and the jth cell; that is,

w_j = 1 if x_0 x_j ≥ 0, and −1 if x_0 x_j < 0. (34)

If we set w_j = sgn(y_0 y_j), then we can also determine the phase relation using y_0 and y_j. If the second-order Star CNN (for associative memories) has N local cells and one master cell, the Star CNN needs N + 1 connections, while the fully-connected CNN with N cells has N(N+1)/2 connections. Therefore, if the number of local cells is equal to 25, namely N = 25, the fully-connected CNN has 325 connections, whereas the Star CNN has only 26 connections. Observe that the Star CNN is much simpler compared to a fully-connected CNN, as shown in Fig. 3.

Next, we study the phase relation between the local and the master cells. In the period satisfying w_j = sgn(x_0(t) x_j(t)) = σ_j^1, we have

m_i = sgn( Σ_j s_ij w_j )
    = sgn( Σ_j s_ij σ_j^1 )
    = sgn( (1/N) Σ_j Σ_{m=1}^{M} σ_i^m σ_j^m σ_j^1 )
    = sgn( σ_i^1 + (1/N) Σ_j Σ_{m=2}^{M} σ_i^m σ_j^m σ_j^1 ). (35)

If the stored patterns are uncorrelated, then the term

(1/N) Σ_j Σ_{m=2}^{M} σ_i^m σ_j^m σ_j^1 (36)

is estimated to be of size √((M−1)/N) [Müller & Reinhardt, 1990]. If the number of stored patterns is much smaller than the total number of cells (M ≪ N), this term is sufficiently small compared with σ_i^1. Thus, we have

m_i = σ_i^1, u_i = d(σ_i^1 x_0 − x_i). (37)

If σ_i^1 = 1 (resp. σ_i^1 = −1), then the driving signal is written as

u_i = d(x_0 − x_i) (resp. u_i = d(−x_0 − x_i)), (38)

and therefore the ith cell tends to synchronize with the master cell in phase (resp. in anti-phase).
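One evaluation of the central system of Eqs. (31)-(34) can be sketched as follows: from the momentary cell states it infers each cell's phase w_j relative to the master, maps the phase pattern onto the nearest stored pattern via m_i, and broadcasts the drive u_i. The momentary states and the two stored test patterns below are our own illustrative choices:

```python
import numpy as np

def sgn(x):
    return np.where(np.asarray(x) >= 0.0, 1.0, -1.0)

# Two orthogonal stored patterns and the coupling coefficients of Eq. (2)
sigma1 = np.ones(16)
sigma2 = np.concatenate([np.ones(8), -np.ones(8)])
P = np.stack([sigma1, sigma2])
S = P.T @ P / P.shape[1]

x0 = 0.7                          # momentary master state
x = sigma1 * x0 * 0.9             # cells oscillating in the sigma^1 phase pattern...
x[0] *= -1.0                      # ...except cell 1, momentarily out of phase

w = sgn(x0 * x)                   # phase relations, Eq. (33)
m = sgn(S @ w)                    # recognized phase pattern, Eq. (32)
d = 10.0
u = d * (m * x0 - x)              # drive toward the recognized pattern, Eq. (31)

print(np.array_equal(m, sigma1))  # True: the central system recognizes sigma^1
```

Even with one cell momentarily out of phase, the sign threshold in Eq. (32) snaps m back to the stored pattern, so the broadcast drive pulls every cell toward the sigma^1 phase relation.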
Note that the attractor corresponding to the stored pattern σ^1 has the ith and jth cells synchronized in phase when σ_i^1 = σ_j^1, and synchronized in anti-phase when σ_i^1 = −σ_j^1. Since σ^1 and −σ^1 define the same pattern of phase relations [Hoppensteadt & Izhikevich, 1999], the Star CNN with the output function sgn(x_i(t)) oscillates between σ^1 and −σ^1 periodically. Thus, the Star CNN can store patterns in the form of synchronized states with appropriate phase relations between the oscillators.

Observe that if we substitute Eqs. (31)-(33) into Eq. (30), we would obtain the following fully-connected CNN:

dx_i/dt = f(x_i, y_i) + d( sgn( Σ_j s_ij sgn(x_0 x_j) ) x_0 − x_i ),
dy_i/dt = g(x_i, y_i). (i = 1, 2, ..., N) (39)

Here, the signal x_0 is obtained from the following master cell:

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0). (40)

Let us consider next a simplified version of the second-order Star CNN (30). In this scheme, the system can be built with fewer circuit elements. The dynamics of the simplified Star CNN is governed by the first-order differential equation

dy_i/dt = g(u_i, y_i), (i = 1, 2, ..., N) (41)

where the input u_i is given by

u_i = m_i x_0, (i = 1, 2, ..., N) (42)

The signal x_0 is obtained from the master cell, and m_i is obtained from

m_i = sgn( Σ_j s_ij w_j ), (43)

where

w_j = sgn(y_0 y_j). (44)

The phase relation is obtained from y_i, and the driving signal x_0 is obtained from the master cell (26). The local cells will synchronize if the Lyapunov exponent of the following difference system is negative:

d(Δy)/dt = (∂g(x_0, y_0)/∂y_0) Δy, (i = 1, 2, ..., N) (45)

where Δy = y_i − y_0. For more details, see [Pecora & Carroll, 1990]. In this system, all cells are first-order cells except the master cell. The dynamics of the master cell is given by the second-order system

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0). (46)

Another interesting neural network for associative memories consists of oscillators having different frequencies, connected weakly via a common medium (bus) forced by an external input [Hoppensteadt & Izhikevich, 1999]. The dynamics of this neural network is defined not on the usual state space, but on the phase space. The relationship between our second-order Star CNN and this neurocomputer is described in Appendix B. We now present two examples of second-order Star CNN cells:

(1) Two-cell CNN [Zou & Nossek, 1991]:

dx_i/dt = −x_i + p h(x_i) + r h(y_i),
dy_i/dt = −y_i + s h(x_i) + q h(y_i), (47)

where p, q, r, and s are constants and h(x_i) = (1/2)(|x_i + 1| − |x_i − 1|) (for example, p = 1.1, q = 1.1, r = 2, s = −2).

(2) Piecewise-linear Van der Pol oscillator [Itoh & Chua, 2003]:

dx_i/dt = α(y_i − x_i + 2h(x_i)),
dy_i/dt = −βx_i, (48)

where α and β are constants and h(x_i) = (1/2)(|x_i + 1| − |x_i − 1|) (for example, α = 2, β = 2).

3.3. Third-order Star CNNs

The dynamics of a chaotic third-order Star CNN cell is defined by

dx_i/dt = f(x_i, y_i, z_i),
dy_i/dt = g(x_i, y_i, z_i),
dz_i/dt = h(x_i, y_i, z_i), (49)

where f, g and h are functions of x_i, y_i and z_i. We assume that each cell has a chaotic attractor which is symmetric with respect to the origin, and that the system

dx_i/dt = f(x_i, y_i, z_i) + d(x_0 − x_i),
dy_i/dt = g(x_i, y_i, z_i),
dz_i/dt = h(x_i, y_i, z_i), (i = 1, 2, ..., N) (50)

will synchronize when the coupling coefficient d is sufficiently large and the driving signal x_0 is obtained from the chaotic master cell

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),
dz_0/dt = h(x_0, y_0, z_0). (51)

The system (50) will synchronize if the Lyapunov exponents of the following difference system are all negative:

d/dt [Δx]   [∂f/∂x_0 − d   ∂f/∂y_0   ∂f/∂z_0] [Δx]
     [Δy] = [∂g/∂x_0       ∂g/∂y_0   ∂g/∂z_0] [Δy]
     [Δz]   [∂h/∂x_0       ∂h/∂y_0   ∂h/∂z_0] [Δz], (52)

where all partial derivatives are evaluated at (x_0, y_0, z_0), and Δx = x_i − x_0, Δy = y_i − y_0, Δz = z_i − z_0 (for more details, see [Madan, 1993]). The dynamics of a third-order Star CNN for associative memories is defined by

dx_i/dt = f(x_i, y_i, z_i) + u_i,
dy_i/dt = g(x_i, y_i, z_i),
dz_i/dt = h(x_i, y_i, z_i), (i = 1, 2, ..., N) (53)

where u_i denotes the input of the ith cell. The central system receives the output x_i from the ith cell and supplies the following signal to each cell:

u_i = d(m_i x_0 − x_i), (i = 1, 2, ..., N) (54)

where the parameter d is assumed to be positive and sufficiently large, and

m_i = sgn( Σ_j s_ij w_j ), (55)

where

w_j = sgn(x_0 x_j). (56)

The signal x_0 is obtained from the master cell (51), and w_j gives the phase relation between the master and the local cells. Observe that if we substitute Eqs. (54)-(56) into Eq. (53), we would obtain the following fully-connected CNN:

dx_i/dt = f(x_i, y_i, z_i) + d( sgn( Σ_j s_ij sgn(x_0 x_j) ) x_0 − x_i ),
dy_i/dt = g(x_i, y_i, z_i),
dz_i/dt = h(x_i, y_i, z_i). (i = 1, 2, ..., N) (57)

Here, the signal x_0 is obtained from the following master cell:

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),
dz_0/dt = h(x_0, y_0, z_0). (58)

A simplified version of the third-order Star CNN (53) is described by the second-order differential equations

dy_i/dt = g(u_i, y_i, z_i),
dz_i/dt = h(u_i, y_i, z_i), (i = 1, 2, ..., N) (59)

where the input signal u_i is given by

u_i = m_i x_0, (i = 1, 2, ..., N) (60)

The signal x_0 is obtained from the master cell, and m_i is obtained from

m_i = sgn( Σ_j s_ij w_j ), (61)

where

w_j = sgn(y_0 y_j). (62)

In this system, we assume that the local cells

dy_i/dt = g(x_0, y_i, z_i),
dz_i/dt = h(x_0, y_i, z_i), (i = 1, 2, ..., N) (63)

driven by x_0 will synchronize. This is satisfied if the Lyapunov exponents of the following difference system are all negative:

d/dt [Δy]   [∂g(x_0, y_0, z_0)/∂y_0   ∂g(x_0, y_0, z_0)/∂z_0] [Δy]
     [Δz] = [∂h(x_0, y_0, z_0)/∂y_0   ∂h(x_0, y_0, z_0)/∂z_0] [Δz], (64)

where Δy = y_i − y_0 and Δz = z_i − z_0 (for more details, see [Pecora & Carroll, 1990]). Finally, we present two examples of chaotic oscillators which can be exploited for associative memories:

(1) Chua's oscillator [Madan, 1993]:

dx_i/dt = α(y_i − x_i − k(x_i)),
dy_i/dt = x_i − y_i + z_i,
dz_i/dt = −βy_i − γz_i, (65)

where α, β, and γ are constants and

k(x_i) = bx_i + (1/2)(a − b)(|x_i + 1.0| − |x_i − 1.0|), (66)

(for example, α = 10, β = 10.5, γ = 0.45, a = −1.22, b = −0.734).

(2) Canonical Chua's oscillator [Itoh & Chua, 2003]:

dx_i/dt = α(x_i + y_i),
dy_i/dt = z_i − x_i,
dz_i/dt = β(y_i + k(z_i)), (67)

where α and β are constants and

k(z_i) = b̃z_i + (1/2)(ã − b̃)(|z_i + 1.0| − |z_i − 1.0|), (68)

(for example, α = 0.3, β = 2, ã = 2, b̃ = 4).
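A sketch of the Chua's oscillator cell of Eqs. (65)-(66) follows. The parameter values are our tentative reading of the partially garbled list in our copy of the text and should be checked against the original paper; the short Euler run below only confirms that the trajectory stays finite for this set, not that it is chaotic:

```python
import numpy as np

# Tentative parameter reading (digits/signs uncertain in our copy)
ALPHA, BETA, GAMMA = 10.0, 10.5, 0.45
A, B = -1.22, -0.734

def k(x):
    # Piecewise-linear resistor characteristic, Eq. (66)
    return B * x + 0.5 * (A - B) * (abs(x + 1.0) - abs(x - 1.0))

def chua(state):
    # Chua's oscillator vector field, Eq. (65)
    x, y, z = state
    return np.array([ALPHA * (y - x - k(x)),
                     x - y + z,
                     -BETA * y - GAMMA * z])

# Short forward-Euler run from a small initial condition
s = np.array([0.1, 0.0, -0.1])
dt = 0.002
for _ in range(10_000):           # integrate to t = 20
    s = s + dt * chua(s)

print(np.all(np.isfinite(s)))     # True: the trajectory remains bounded
```

The outer segments of k have a stabilizing slope, so trajectories do not escape to infinity; whether this particular parameter set produces the symmetric chaotic attractor the paper requires should be verified against the original.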

See [Madan, 1993] and [Itoh et al., 1994] for the chaotic synchronization of these oscillators.

3.4. Computer simulations of associative memories

Some computer simulations of associative memories using Chua's oscillators are given in Fig. 7. The four stored patterns, together with their complements, are given in Fig. 4. Observe that the Star CNN (53) can retrieve the patterns as synchronized oscillating patterns. Figure 8 shows the waveforms of the local cells. Figure 9 shows the synchronization of the local and master cells. Figure 10 shows the relationship between the output waveforms and the sequence of patterns. Additional spurious patterns are also memorized, as shown in Fig. 11. The simplified Star CNN (59) can retrieve not only the stored patterns, but can also generate some spurious patterns, as shown in Fig. 11. We have also applied the Star CNNs with associative memories to other kinds of oscillators, namely, canonical Chua's oscillators, two-cell CNNs, and piecewise-linear Van der Pol oscillators, and obtained similar results. Note that in these examples, we used the function v_j = sgn(x_j) as the output of the cells.

Fig. 7. Retrieved patterns. The first and second (third) rows indicate the initial patterns and the recognized oscillatory patterns, respectively. Since the output function sgn(x_i(t)) is used, the Star CNN oscillates between the retrieved pattern σ^k and its reverse −σ^k periodically. The Star CNN consists of 26 Chua's oscillators (25 local cells and 1 master cell).

Fig. 8. Waveforms of the 1st-10th and 16th-25th cells, which are synchronized with the master cell in phase (left), and waveforms of the 11th, 12th, 13th, 14th, and 15th cells, which are synchronized with the master cell in anti-phase (right). The symbols x, y, z and r denote x_j, y_j, z_j, and the cell output sgn(x_j), respectively.

Fig. 9. Synchronization of x_1 and x_11 with the master. The state variables x_1 and x_0 are synchronized in phase, and x_11 and x_0 are synchronized in anti-phase.

(a) Output of the 1st-10th and 16th-25th cells. (b) Output of the 11th-15th cells.

Fig. 10. Relationship between the output waveforms and the sequence of patterns (snapshots taken at selected times between t = 200 and t = 296). The change of patterns is chaotic.

Fig. 11. A recognized pattern which converges to an unstored oscillatory pattern. The first and second (third) rows indicate the initial patterns and the recognized oscillatory patterns, respectively.

3.5. Pattern recognition from a partial pattern of images

The number of coupling coefficients s_ij increases rapidly as the given image size increases. For example, if the image is given by a 100 × 100 pattern (namely, 10,000 local cells), then we have to obtain a 10,000 × 10,000 matrix and store 10^8 coupling coefficients. In the case of a 1000 × 1000 pattern, we have to obtain a 10^6 × 10^6 matrix and store 10^12 coefficients. Thus, it is not practical to simulate associative memories for large patterns on a PC. However, under certain specialized situations, we can recall an entire image from a much smaller and corrupted subset, as illustrated by the computer simulations shown in Figs. 12 and 13. In this example, only a 3 × 3 pattern (1% of the image) marked by a green square is used for the recognition, and the following palettes are applied to the cell state x_i and the cell output v_i = sgn(x_i):

State x_i: red for 1 < x_i; black for x_i = 1; gray scale for −1 < x_i < 1; white for x_i = −1; blue for x_i < −1.
Output v_i: blue for v_i < 0; red for v_i ≥ 0.

Observe that the three kinds of snails can be identified from only a small partial subset. This example suggests the possibility of the following recognition strategy: Partition a given image into several appropriately selected smaller subsets. Choose several partial patterns. Recognize them independently by using Star CNNs. Apply a majority rule if the recognized images differ.

In this procedure, the number of coupling coefficients is relatively small. That is, if a given image is partitioned into K partial subsets, then the number of coupling coefficients reduces from N^2 to LN^2/K^2. Here, the symbols N and L denote the total number of cells and the number of chosen partial patterns, respectively. In Figs.
12 and 13, we have N = 900, K = 100, L = 1, and so N^2 = 810,000 and LN^2/K^2 = 81. Thus, the number of coupling coefficients reduces from 810,000 to only 81 (that is, 0.01%).

4. Dynamic Memories

The Star CNN for associative memories usually converges to a stored pattern, or to a spurious pattern, if an input is given. However, memory associations in human brains are not always static. It sometimes

Fig. 12. Relation between global images and partial patterns of images for associative memories. (a)-(c) Global images, (d)-(f) partial patterns of images (marked by green squares), which made use of associative memories.

wanders from a certain memory to another memory, one after another. Furthermore, a flash of inspiration (a new pattern) sometimes appears which is related to known memories, and hence is called a spurious memory. The generation of spurious memories in human brains can be interpreted as a form of creative activity [Christos, 2003].

In this section, we realize the function of dynamic memories [Adachi & Aihara, 1997] by modifying the signal u_i from the central system. In this system, the output pattern of the Star CNN can travel around stored patterns and their reverse patterns, as well as new relevant patterns called spurious patterns [Christos, 2003].

4.1. Third-order Star CNNs with dynamic memories

The dynamics of a third-order Star CNN with dynamic memories is defined by

dx_i/dt = f(x_i, y_i, z_i) + u_i,
dy_i/dt = g(x_i, y_i, z_i),
dz_i/dt = h(x_i, y_i, z_i), (i = 1, 2, ..., N) (69)

where u_i denotes the input of the ith cell. The central system receives the output x_i from each Star CNN cell and supplies the following signal to the ith cell:

22 746 M. Itoh & L. O. Chua (a) (b) (c) 4 Dynamic Memories The Star CNN for associative memories usually converges to a stored pattern, or to a spurious pattern if an input is given. However, memory associations in human brains are not always static. It sometimes wanders from a certain memory to another memory one after another. Furthermore, a flash of inspiration (new pattern) sometimes appears which is related to known memories, and hence is called a spurious memory. The generation of spurious memories in human brains can be interpreted as a form of creative activity [Christos, 2003]. In this section, we realize the function of dynamic memories [Adachi & Aihara, 997] by modifying the signal u i from the central system. In this system, the output pattern of the Star CNN can travel oscillatory oscillatory oscillatory around stored patterns and their reverse patterns, as well as new relevant patterns called spurious patterns [Christos, 2003]. 4. Third-order Star CNNs with dynamic memories The dynamics of a third-order Star CNNs with dynamic memories is defined by dynamics = f(x i,y i,z i )+u i, = g(x i,y i,z i ), (i=, 2,, N) (69) Fig. 3. Retrieved patterns. The first and second (third) rows indicate the initial patterns and the recognized oscillatory output Figure patterns, 3: respectively. Retrieved In patterns. this dz i example, The Chua s first and oscillators second are (third) used as rows local indicate cells. the initial patterns and the recognized oscillatory output = patterns, h(x i,y i,z respectively. i ), In this example, Chua s oscillators are used as local cells. where where u i denotes u i denotes the the input input of the of the ith ith cell. cell. 
The central system receives the output x_i from each Star CNN cell and supplies the following signal to the ith cell:

u_i = d(m̃_i x_0 − x_i),   (i = 1, 2, ..., N)   (70)

where the parameter d is a positive constant and m̃_i is given by

m̃_i = Σ_{j=1}^N s_ij w_j,   (71)

where

w_j = sgn(x_0 x_j).   (72)

The signal x_0 is obtained from the master chaotic cell

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),   (73)
dz_0/dt = h(x_0, y_0, z_0).

Let us compare m_i and m̃_i in Eqs. (55) and (71). Observe that whereas m_i in Eq. (55) always assumes binary values, namely,

m_i = sgn( Σ_{j=1}^N s_ij w_j ) = ±1,   (74)

there is no restriction on the amplitude of m̃_i in Eq. (71).

Let us study next the dynamics of (71). For the time period of t satisfying w_j = sgn(x_0(t) x_j(t)) = σ_j^1 (j = 1, 2, ..., N), we get the following relation:

m̃_i = Σ_{j=1}^N s_ij w_j = Σ_{j=1}^N s_ij σ_j^1 = σ_i^1 + (1/N) Σ_{m=2}^M σ_i^m ( Σ_{j=1}^N σ_j^m σ_j^1 ) = σ_i^1 + ε,   (75)

where

ε = (1/N) Σ_{m=2}^M σ_i^m Σ_{j=1}^N σ_j^m σ_j^1.   (76)

In this case, the second term ε affects the value of m̃_i. If we assume that M ≪ N (i.e. the number of stored patterns is much smaller than the total number of Star CNN cells), then we have |ε| ≪ 1 and

m̃_i = σ_i^1 + ε ≈ σ_i^1,
u_i = d(m̃_i x_0 − x_i) = d(σ_i^1 x_0 − x_i) + dεx_0(t) ≈ d(σ_i^1 x_0 − x_i).   (77)

Thus, the driving signal u_i may not synchronize the master and slave cells completely, since the term dεx_0(t) disturbs the synchronization. The output pattern oscillates in the phase relation corresponding to the pattern σ^1 for a while, but does not converge to the pattern σ^1. Hence, the output pattern may occasionally travel around the stored patterns and their reverse patterns, because chaos exhibits a recurrent property, or it may travel around new (spurious) patterns which are related to the stored patterns. Furthermore, there are cases where the output pattern converges to an unexpected pattern or oscillates between some of the stored and unstored patterns. Therefore, it is necessary to carry out extensive numerical simulations of the Star CNN (69). Substituting Eqs. (70)–(72) into Eq.
(69), we obtain the following fully-connected CNN:

dx_i/dt = f(x_i, y_i, z_i) + d( Σ_{j=1}^N s_ij sgn(x_0 x_j) x_0 − x_i ),
dy_i/dt = g(x_i, y_i, z_i),      (i = 1, 2, ..., N)   (78)
dz_i/dt = h(x_i, y_i, z_i).

Here, the signal x_0 is obtained from the following master cell:

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),   (79)
dz_0/dt = h(x_0, y_0, z_0).
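The crosstalk analysis of Eqs. (75) and (76) can be checked numerically. The sketch below is an illustration, not the paper's code; it assumes the Hebbian-type interconnections s_ij = (1/N) Σ_m σ_i^m σ_j^m used for the associative Star CNN, and evaluates m̃_i = Σ_j s_ij w_j for w = σ^1:

```python
# Sketch (assumption): Hebbian weights s_ij = (1/N) * sum_m sigma_i^m sigma_j^m,
# and the recalled field m_i of Eq. (71); for orthogonal stored patterns the
# crosstalk eps of Eq. (76) vanishes exactly.

def hebbian_weights(patterns):
    """s_ij built from the stored +-1 patterns."""
    N = len(patterns[0])
    return [[sum(p[i] * p[j] for p in patterns) / N for j in range(N)]
            for i in range(N)]

def recall_field(s, w):
    """m_i = sum_j s_ij w_j  (Eq. (71)); w is the current phase pattern."""
    N = len(w)
    return [sum(s[i][j] * w[j] for j in range(N)) for i in range(N)]

# Two mutually orthogonal +-1 patterns: eps = 0, so m_i recovers sigma_i^1.
sigma1 = [1, 1, 1, 1, 1, 1, 1, 1]
sigma2 = [1, -1, 1, -1, 1, -1, 1, -1]
s = hebbian_weights([sigma1, sigma2])
m = recall_field(s, sigma1)
print(m)  # each entry equals sigma1[i] = 1.0, since eps = 0 here
```

With non-orthogonal random patterns the entries of m deviate from ±1 by the crosstalk ε, which shrinks as M ≪ N, exactly as argued above.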

A simplified version of Eq. (69) may be defined as follows:

dy_i/dt = g(u_i, y_i, z_i),
dz_i/dt = h(u_i, y_i, z_i),     (i = 1, 2, ..., N)   (80)

where u_i is given by

u_i = m̃_i x_0,   (i = 1, 2, ..., N).   (81)

The signal x_0 is obtained from the master cell, and m̃_i is obtained from

m̃_i = Σ_{j=1}^N s_ij w_j,   (82)

where

w_j = sgn(y_0 y_j).   (83)

In this case, the driving signal u_i is uniquely determined, since u_i does not contain the parameter d. The dynamics of this system is studied in Appendix C.

4.2. Amplitude of driving signals

In this section, we modify the amplitude of m̃_i in Eq. (71). Suppose now that the central system supplies the following signal to the local cells:

u_i = d(m̃_i x_0 − x_i),   (i = 1, 2, ..., N)   (84)

where

m̃_i = e Σ_{j=1}^N s_ij w_j,   (e > 0)   (85)

and

w_j = sgn(x_0 x_j) or sgn(y_0 y_j).   (86)

Note that the parameter e is added to m̃_i. Then the attractor of the local cells driven by Eq. (84) is not the same as the attractor driven by Eq. (70). Furthermore, if we obtain the phase relation from the equation w_j = sgn(y_0 y_j), then the dynamics is not always identical to the one obtained from the equation w_j = sgn(x_0 x_j). However, these two modifications sometimes produce good results for dynamic memories, as shown in the examples of Sec. 4.3 (see also Appendix D). Similarly, we can also apply the following driving signal to Eq. (80):

u_i = m̃_i x_0,   (i = 1, 2, ..., N)   (87)

where

m̃_i = e Σ_{j=1}^N s_ij w_j,   (e > 0)   (88)

and

w_j = sgn(y_0 y_j).   (89)

4.3. Computer simulations of dynamic memories

Several computer simulations of dynamic memories via Chua's oscillators are given in Figs. 14–16. The stored patterns are given in Fig. 4. In this example, we set d = 4 and e = 1. Observe that many new related patterns (spurious patterns) appear between the stored patterns and their reverse patterns. The sequence of these patterns is extremely sensitive to initial conditions, and the staying period of spurious patterns is shorter than that of stored patterns.
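Before turning to the simulation figures, the loop structure of Eqs. (78) and (79) can be sketched in code. The Euler-integration sketch below assumes Chua's circuit as the cell dynamics; all parameter values (α, β, the diode slopes, d, e, initial states, step size) are illustrative assumptions, not the values used in the paper:

```python
# Minimal numerical sketch of the dynamic-memory Star CNN (78)-(79).
# Chua's circuit and every parameter value here are illustrative assumptions.

def h_pwl(x, m0=-8/7, m1=-5/7):
    """Piecewise-linear Chua-diode characteristic."""
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def chua(x, y, z, alpha=9.0, beta=14.286):
    """Right-hand side (f, g, h) of Chua's circuit."""
    return alpha * (y - x - h_pwl(x)), x - y + z, -beta * y

def sgn(v):
    return 1 if v >= 0 else -1

def run_star_cnn(sigma, d=4.0, e=1.0, dt=0.005, steps=3000):
    N = len(sigma)
    # Hebbian weights for a single stored pattern: s_ij = sigma_i*sigma_j/N
    s = [[sigma[i] * sigma[j] / N for j in range(N)] for i in range(N)]
    x0, y0, z0 = 0.1, 0.0, 0.0                   # master cell state
    cells = [[0.1 * (k + 1), 0.0, 0.0] for k in range(N)]
    for _ in range(steps):
        w = [sgn(x0 * c[0]) for c in cells]      # phase pattern, Eq. (72)
        nxt = []
        for i, (x, y, z) in enumerate(cells):
            m_i = e * sum(s[i][j] * w[j] for j in range(N))  # Eq. (85)
            u_i = d * (m_i * x0 - x)                         # Eq. (70)
            fx, fy, fz = chua(x, y, z)
            nxt.append([x + dt * (fx + u_i), y + dt * fy, z + dt * fz])
        dx0, dy0, dz0 = chua(x0, y0, z0)         # master cell, Eq. (79)
        x0, y0, z0 = x0 + dt * dx0, y0 + dt * dy0, z0 + dt * dz0
        cells = nxt
    return [sgn(c[0]) for c in cells]            # output pattern sgn(x_i)

print(run_star_cnn([1, -1, 1, -1]))
```

The returned ±1 vector is the sampled output pattern sgn(x_i); because the cells are chaotic and the crosstalk term dεx_0(t) perturbs the synchronization, the visited pattern changes over time, which is the dynamic-memory behavior described above.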
The spurious patterns are usually made up of combinations of stored patterns. Several spurious

Fig. 14. Sequence of output patterns of the Star CNN (69) for t ∈ [0, 44]. These patterns are sampled every second by using the output function sgn(x_i) and illustrated only when the pattern changes. The Star CNN consists of 26 Chua's oscillators.

Fig. 15. Sequence of output patterns for t ∈ [45, 77].

Fig. 16. Sequence of output patterns for t ∈ [78, 100].

patterns and their basic components are illustrated in Figs. 18–20. These results are related to the fact that we generally solve problems and generate new ideas by combining various notions, ideas, or memories in our mind, and sometimes have a sudden flash of inspiration. George Christos suggests that without spurious memories, the brain would not be capable of learning anything new; furthermore, it would become obsessed with its own strongest stored memories. Spurious memories help the brain avoid this problem and learn something new, albeit similar in some respect to what is already stored in the network [Christos, 2003].

Fig. 17. A set of basic patterns.

Let us investigate next the synchronization behavior of the local cells in Fig. 21. The local cells are not synchronized completely, since the output pattern occasionally travels around the stored patterns. Figure 22 shows the computer simulation of the simplified Star CNN (80). The output pattern converges to the stored oscillatory pattern, since g(u_i, y_i, z_i) and h(u_i, y_i, z_i) in Eq. (80) are linear functions (see Appendix C). Therefore, this system cannot exhibit dynamic memories.

The canonical Chua's oscillators can also be used for creating dynamic memories, as shown in Figs. 23 and 24. In this example, the functions u_i, m̃_i, and w_j are chosen as follows:

u_i = d(m̃_i x_0 − x_i),
m̃_i = e Σ_{j=1}^N s_ij w_j,
w_j = sgn(z_0 z_j) = 1 if z_0 z_j ≥ 0, and −1 if z_0 z_j < 0,   (90)

where d = 0.1 and e = 0.001. That is, the phase relation is obtained from z_i, but the driving signal is obtained from x_i. It is possible to obtain both the phase relation and the driving signal from x_i.
A simplified Star CNN (80) based on the canonical Chua's oscillators can also exhibit dynamic memories, since h(u_i, y_i, z_i) = β(y_i + k(z_i)) is a nonlinear (piecewise-linear) function (see Appendix C).

4.4. Second-order Star CNNs with dynamic memories

In this subsection, we investigate the possibility of creating dynamic memories in second-order Star CNNs. Although autonomous second-order oscillators cannot be chaotic, coupled second-order oscillators may produce chaotic attractors. Thus, the following second-order Star CNN may exhibit dynamic memories:

dx_i/dt = f(x_i, y_i) + u_i,
dy_i/dt = g(x_i, y_i),      (i = 1, 2, ..., N)   (91)
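Equation (91) leaves f and g unspecified. As one concrete second-order cell, a two-cell CNN oscillator can be sketched as below; the template form and the parameter values p and s_c are illustrative assumptions, not the paper's specification:

```python
# Sketch of a second-order two-cell CNN oscillator usable as f, g in Eq. (91).
# The template and the parameters p, s_c are assumptions for illustration.

def sat(v):
    """Standard CNN output nonlinearity: sat(v) = 0.5*(|v+1| - |v-1|)."""
    return 0.5 * (abs(v + 1) - abs(v - 1))

def f(x, y, p=2.0, s_c=1.2):
    # dx/dt = -x + p*sat(x) - s_c*sat(y)
    return -x + p * sat(x) - s_c * sat(y)

def g(x, y, p=2.0, s_c=1.2):
    # dy/dt = -y + s_c*sat(x) + p*sat(y)
    return -y + s_c * sat(x) + p * sat(y)

# One Euler step of the ith cell of Eq. (91) under a central-system input u_i:
x_i, y_i, u_i, dt = 0.3, -0.2, 0.05, 0.01
x_i, y_i = x_i + dt * (f(x_i, y_i) + u_i), y_i + dt * g(x_i, y_i)
```

With p > 1 a cell of this form oscillates on its own, and, as the text notes, chaos can only arise here through the coupling, since an autonomous second-order system cannot be chaotic.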

Fig. 18. Spurious patterns (left) which are made up of combinations of basic patterns (right).

Fig. 19. Spurious patterns (left) which are made up of combinations of basic patterns (right).

Fig. 20. Spurious patterns (left) which are made up of combinations of basic patterns (right).

Fig. 21. Synchronization and waveforms of x_19 and x_12. (a) Waveforms x_j(t) of the 19th cell (red) and 12th cell (blue) for d = 0.1. The two cells are desynchronized at t ≈ 490, and then synchronized again at t ≈ 670. (b) Waveforms x_j(t) of the 19th cell (red) and 12th cell (blue) for d = 4. (c) Synchronization of x_19 and x_12 for d = 0.1. (d) Synchronization of x_19 and x_12 for d = 4.

Fig. 21. (Continued)

where u_i denotes the input of the ith cell. The central system receives the output x_i from each Star CNN cell and supplies the following signal to each cell:

u_i = d(m̃_i x_0 − x_i),   (i = 1, 2, ..., N)   (92)

where

m̃_i = Σ_{j=1}^N s_ij w_j,   (93)

and

w_j = sgn(x_0 x_j) or sgn(y_0 y_j).   (94)

The signal x_0 is obtained from the master cell

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0).   (95)

Observe that if we substitute Eqs. (92)–(94) into Eq. (91), we obtain a fully-connected CNN:

dx_i/dt = f(x_i, y_i) + d( Σ_{j=1}^N s_ij sgn(x_0 x_j) x_0 − x_i ),
dy_i/dt = g(x_i, y_i).      (i = 1, 2, ..., N)   (96)

Here, the signal x_0 is obtained from the following master cell:

dx_0/dt = f(x_0, y_0),
dy_0/dt = g(x_0, y_0).   (97)

A simplified version of Eq. (91) is defined as follows:

dy_i/dt = g(u_i, y_i),   (i = 1, 2, ..., N)   (98)

where the input u_i is given by

u_i = m̃_i x_0,   (i = 1, 2, ..., N).   (99)

Here, the signal x_0 is obtained from the master cell, and m̃_i is obtained from

m̃_i = Σ_{j=1}^N s_ij w_j,   (100)

where

w_j = sgn(y_0 y_j).   (101)

However, the simplified Star CNN (98) cannot exhibit dynamic memories, because its Star CNN cells are one-dimensional systems.

Fig. 22. Output pattern converges to the stored oscillatory pattern. (a) Initial pattern, (b) converged oscillatory pattern.

Similarly, the following signals (with the parameters d and e):

u_i = d(m̃_i x_0 − x_i),   (102)

where

m̃_i = e Σ_{j=1}^N s_ij w_j,   (e > 0)
w_j = sgn(x_0 x_j),   (103)

and

u_i = m̃_i x_0,

where

m̃_i = e Σ_{j=1}^N s_ij w_j,   (e > 0)   (104)
w_j = sgn(y_0 y_j),   (105)

can be applied to Eqs. (91) and (98), respectively.

Several computer simulations of dynamic memories via two-cell CNNs are illustrated in Figs. 25–28. The stored patterns are given in Fig. 4. In this example, we set d = 0.2 and e = 0.001. Observe that many new related patterns appear between the stored patterns and their reverse patterns. In the case of the piecewise-linear Van der Pol oscillators, we could not find appropriate parameter values of d and e for creating dynamic memories.

5. Relationship between Associative and Dynamic Memories

In this section, we study the relationship between the dynamics of associative memories and the dynamics of dynamic memories. Consider the following system:

dx_i/dt = f(x_i, y_i, z_i) + u_i,
dy_i/dt = g(x_i, y_i, z_i),      (i = 1, 2, ..., N)   (106)
dz_i/dt = h(x_i, y_i, z_i),

Fig. 23. Sequence of output patterns of the Star CNN (69) for t ∈ [0, 63]. These patterns are sampled every second by using the output function sgn(x_i) and illustrated only when the pattern changes. The Star CNN consists of 26 canonical Chua's oscillators.

Fig. 24. Sequence of output patterns for t ∈ [64, 100].

Fig. 25. Sequence of output patterns of the Star CNN (91) for t ∈ [0, 32]. These patterns are sampled every second by using the output function sgn(x_i) and illustrated only when the pattern changes. The Star CNN consists of 26 two-cell CNNs.

Fig. 26. Sequence of output patterns for t ∈ [33, 62].

Fig. 27. Sequence of output patterns for t ∈ [63, 92].

Fig. 28. Sequence of output patterns for t ∈ [93, 100].

where u_i denotes the input of the ith cell. The central system receives the output x_i from each local cell and supplies the following signal to each cell:

u_i = d(m_i x_0 − x_i),   (i = 1, 2, ..., N)   (107)

where d > 0 and

m_i = k( e Σ_{j=1}^N s_ij w_j ),   (108)

where

w_j = sgn(x_0 x_j).   (109)

Here, the symbol k(·) denotes a scalar function, w_j specifies the phase relation between the master and the local cells, and x_0 is obtained from the master cell

dx_0/dt = f(x_0, y_0, z_0),
dy_0/dt = g(x_0, y_0, z_0),   (110)
dz_0/dt = h(x_0, y_0, z_0).

The scalar function k(·) and the parameters d and e in Eqs. (107) and (108) are defined as follows:

Memory Type           k(x)      d        e        m_i
Associative memory    sgn(x)    d        e = 1    m_i = ±1
Dynamic memory        x         d < 1    e < 1    |m_i| < 1
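The table above can be read as one parameterized central system: only the scalar function k and the sizes of d and e change between the two memory types. The sketch below illustrates this; the function name central_signal and the numeric values of d and e are illustrative assumptions, not the paper's:

```python
# Sketch of the unified central-system signal of Eqs. (107)-(109): the choice
# of k and of (d, e) selects associative vs. dynamic behavior.

def sgn(v):
    return 1 if v >= 0 else -1

def central_signal(x0, x, s, k, d, e):
    """u_i = d * (k(e * sum_j s_ij w_j) * x0 - x_i), with w_j = sgn(x0*x_j)."""
    N = len(x)
    w = [sgn(x0 * xj) for xj in x]
    u = []
    for i in range(N):
        m_i = k(e * sum(s[i][j] * w[j] for j in range(N)))
        u.append(d * (m_i * x0 - x[i]))
    return u

# Associative regime: k = sgn forces m_i to +-1 (Eq. (74)).
# Dynamic regime: k = identity leaves the crosstalk inside m_i (Eq. (75)).
sigma = [1, -1, 1, -1]
N = len(sigma)
s = [[sigma[i] * sigma[j] / N for j in range(N)] for i in range(N)]
u_assoc = central_signal(1.0, sigma, s, k=sgn, d=2.0, e=1.0)
u_dyn = central_signal(1.0, sigma, s, k=lambda v: v, d=0.5, e=0.5)
```

With the cells already sitting on the stored pattern, the associative signal vanishes (the pattern is an equilibrium of the drive), while the dynamic signal keeps a nonzero residual that can push the network onward, which is the qualitative difference the table expresses.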

In the case of associative memories, m_i is restricted to the binary values ±1. In contrast, there is no restriction on m_i in the case of dynamic memories. Furthermore, the parameter ranges of d and e are completely different. It is known that the number of spurious patterns arising from static (associative) memories increases with the number of stored patterns [Christos, 2003]. In contrast, a dynamic memory can create many spurious patterns even if only a few patterns are stored. In other words, dynamic memories are more creative than static memories.

6. Communication between Star CNN Cells

A chaotic communication system based on Chua's oscillators was proposed in [Itoh et al., 1994]. This system can be used for communication between Star CNN cells. In this system, the central system supplies the following driving signal to the Star CNN cells:

u_i = x_0,   (i = 1, 2, ..., N)   (111)

where the signal x_0 is obtained from the master cell

dx_0/dt = f(x_0, y_0, z_0) + s(t),
dy_0/dt = g(x_0, y_0, z_0),   (112)
dz_0/dt = h(x_0, y_0, z_0).

Here, s(t) denotes the information signal; that is, the information signal is injected into the master cell as a forcing term. The Star CNN cells can recover the information signal s(t) using the following system:

dy_i/dt = g(u_i, y_i, z_i),
dz_i/dt = h(u_i, y_i, z_i),      (i = 1, 2, ..., N)   (113)
r(t) = du_i/dt − f(u_i, y_i, z_i).

An outline of the recovery process is as follows. Suppose that the local cells (113) synchronize. Then we have the relation

lim_{t→∞} y_i(t) = y_0(t),   lim_{t→∞} z_i(t) = z_0(t),   (114)

and so

r(t) = du_i/dt − f(u_i, y_i, z_i) ≈ dx_0/dt − f(x_0, y_0, z_0) = s(t)   (for t ≫ 1).   (115)
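The recovery outline of Eqs. (111)–(115) can be sketched numerically: drive a reduced local cell with u = x_0, estimate du/dt by a forward difference, and subtract f(u, y_i, z_i). Chua's circuit, the injected signal s(t), and all parameter values below are illustrative assumptions:

```python
# Sketch of chaotic-masking recovery, Eqs. (111)-(115), with assumed values.
import math

def h_pwl(x, m0=-8/7, m1=-5/7):
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def f(x, y, z, alpha=9.0):
    return alpha * (y - x - h_pwl(x))

def g(x, y, z):
    return x - y + z

def h(x, y, z, beta=14.286):
    return -beta * y

def s(t):
    return 0.01 * math.sin(t)        # small information signal (assumed)

dt, steps = 0.005, 6000
x0, y0, z0 = 0.1, 0.0, 0.0           # master cell (112)
yi, zi = 0.3, -0.1                   # local cell (113), driven by u = x0
errors = []
for k in range(steps):
    t = k * dt
    u = x0                           # driving signal (111)
    x0n = x0 + dt * (f(x0, y0, z0) + s(t))   # next master state
    du = (x0n - x0) / dt             # forward-difference estimate of du/dt
    r = du - f(u, yi, zi)            # recovered signal (115)
    errors.append(abs(r - s(t)))
    yi, zi = yi + dt * g(u, yi, zi), zi + dt * h(u, yi, zi)
    y0, z0 = y0 + dt * g(x0, y0, z0), z0 + dt * h(x0, y0, z0)
    x0 = x0n
print(max(errors[-500:]))  # recovery error after the synchronization transient
```

After the transient, r(t) tracks s(t) because the residual equals α(y_0 − y_i), and the driven (y, z) subsystem is stable, so this error decays regardless of the chaotic carrier, which is exactly the argument behind Eq. (115).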


The Mixed States of Associative Memories Realize Unimodal Distribution of Dominance Durations in Multistable Perception The Mixed States of Associative Memories Realize Unimodal Distribution of Dominance Durations in Multistable Perception Takashi Kanamaru Department of Mechanical Science and ngineering, School of Advanced

More information

A. The Hopfield Network. III. Recurrent Neural Networks. Typical Artificial Neuron. Typical Artificial Neuron. Hopfield Network.

A. The Hopfield Network. III. Recurrent Neural Networks. Typical Artificial Neuron. Typical Artificial Neuron. Hopfield Network. Part 3A: Hopfield Network III. Recurrent Neural Networks A. The Hopfield Network 1 2 Typical Artificial Neuron Typical Artificial Neuron connection weights linear combination activation function inputs

More information

On the synchronization of a class of electronic circuits that exhibit chaos

On the synchronization of a class of electronic circuits that exhibit chaos Chaos, Solitons and Fractals 13 2002) 1515±1521 www.elsevier.com/locate/chaos On the synchronization of a class of electronic circuits that exhibit chaos Er-Wei Bai a, *, Karl E. Lonngren a, J.C. Sprott

More information

Master-Slave Synchronization using. Dynamic Output Feedback. Kardinaal Mercierlaan 94, B-3001 Leuven (Heverlee), Belgium

Master-Slave Synchronization using. Dynamic Output Feedback. Kardinaal Mercierlaan 94, B-3001 Leuven (Heverlee), Belgium Master-Slave Synchronization using Dynamic Output Feedback J.A.K. Suykens 1, P.F. Curran and L.O. Chua 1 Katholieke Universiteit Leuven, Dept. of Electr. Eng., ESAT-SISTA Kardinaal Mercierlaan 94, B-1

More information

Backstepping synchronization of uncertain chaotic systems by a single driving variable

Backstepping synchronization of uncertain chaotic systems by a single driving variable Vol 17 No 2, February 2008 c 2008 Chin. Phys. Soc. 1674-1056/2008/17(02)/0498-05 Chinese Physics B and IOP Publishing Ltd Backstepping synchronization of uncertain chaotic systems by a single driving variable

More information

Impulsive Stabilization for Control and Synchronization of Chaotic Systems: Theory and Application to Secure Communication

Impulsive Stabilization for Control and Synchronization of Chaotic Systems: Theory and Application to Secure Communication 976 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL. 44, NO. 10, OCTOBER 1997 Impulsive Stabilization for Control and Synchronization of Chaotic Systems: Theory and

More information

Chapter 4 Integration

Chapter 4 Integration Chapter 4 Integration SECTION 4.1 Antiderivatives and Indefinite Integration Calculus: Chapter 4 Section 4.1 Antiderivative A function F is an antiderivative of f on an interval I if F '( x) f ( x) for

More information

CSC321 Lecture 2: Linear Regression

CSC321 Lecture 2: Linear Regression CSC32 Lecture 2: Linear Regression Roger Grosse Roger Grosse CSC32 Lecture 2: Linear Regression / 26 Overview First learning algorithm of the course: linear regression Task: predict scalar-valued targets,

More information

Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG

Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG Cung Nguyen and Robert G. Redinbo Department of Electrical and Computer Engineering University of California, Davis, CA email: cunguyen,

More information

Hopfield Neural Network

Hopfield Neural Network Lecture 4 Hopfield Neural Network Hopfield Neural Network A Hopfield net is a form of recurrent artificial neural network invented by John Hopfield. Hopfield nets serve as content-addressable memory systems

More information

THE SYNCHRONIZATION OF TWO CHAOTIC MODELS OF CHEMICAL REACTIONS

THE SYNCHRONIZATION OF TWO CHAOTIC MODELS OF CHEMICAL REACTIONS ROMAI J., v.10, no.1(2014), 137 145 THE SYNCHRONIZATION OF TWO CHAOTIC MODELS OF CHEMICAL REACTIONS Servilia Oancea 1, Andrei-Victor Oancea 2, Ioan Grosu 3 1 U.S.A.M.V., Iaşi, Romania 2 Erasmus Mundus

More information

CONTROLLING CHAOTIC DYNAMICS USING BACKSTEPPING DESIGN WITH APPLICATION TO THE LORENZ SYSTEM AND CHUA S CIRCUIT

CONTROLLING CHAOTIC DYNAMICS USING BACKSTEPPING DESIGN WITH APPLICATION TO THE LORENZ SYSTEM AND CHUA S CIRCUIT Letters International Journal of Bifurcation and Chaos, Vol. 9, No. 7 (1999) 1425 1434 c World Scientific Publishing Company CONTROLLING CHAOTIC DYNAMICS USING BACKSTEPPING DESIGN WITH APPLICATION TO THE

More information

Show that the following problems are NP-complete

Show that the following problems are NP-complete Show that the following problems are NP-complete April 7, 2018 Below is a list of 30 exercises in which you are asked to prove that some problem is NP-complete. The goal is to better understand the theory

More information

Synchronization of a General Delayed Complex Dynamical Network via Adaptive Feedback

Synchronization of a General Delayed Complex Dynamical Network via Adaptive Feedback Synchronization of a General Delayed Complex Dynamical Network via Adaptive Feedback Qunjiao Zhang and Junan Lu College of Mathematics and Statistics State Key Laboratory of Software Engineering Wuhan

More information

Chapter 2 Simplicity in the Universe of Cellular Automata

Chapter 2 Simplicity in the Universe of Cellular Automata Chapter 2 Simplicity in the Universe of Cellular Automata Because of their simplicity, rules of cellular automata can easily be understood. In a very simple version, we consider two-state one-dimensional

More information

NONLINEAR CLASSIFICATION AND REGRESSION. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition

NONLINEAR CLASSIFICATION AND REGRESSION. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition NONLINEAR CLASSIFICATION AND REGRESSION Nonlinear Classification and Regression: Outline 2 Multi-Layer Perceptrons The Back-Propagation Learning Algorithm Generalized Linear Models Radial Basis Function

More information

Degree Master of Science in Mathematical Modelling and Scientific Computing Mathematical Methods I Thursday, 12th January 2012, 9:30 a.m.- 11:30 a.m.

Degree Master of Science in Mathematical Modelling and Scientific Computing Mathematical Methods I Thursday, 12th January 2012, 9:30 a.m.- 11:30 a.m. Degree Master of Science in Mathematical Modelling and Scientific Computing Mathematical Methods I Thursday, 12th January 2012, 9:30 a.m.- 11:30 a.m. Candidates should submit answers to a maximum of four

More information

Anatomy of a Bit. Reading for this lecture: CMR articles Yeung and Anatomy.

Anatomy of a Bit. Reading for this lecture: CMR articles Yeung and Anatomy. Anatomy of a Bit Reading for this lecture: CMR articles Yeung and Anatomy. 1 Information in a single measurement: Alternative rationale for entropy and entropy rate. k measurement outcomes Stored in log

More information

Memristor Based Chaotic Circuits

Memristor Based Chaotic Circuits Memristor Based Chaotic Circuits Bharathwaj Muthuswamy Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2009-6 http://www.eecs.berkeley.edu/pubs/techrpts/2009/eecs-2009-6.html

More information

Using a Hopfield Network: A Nuts and Bolts Approach

Using a Hopfield Network: A Nuts and Bolts Approach Using a Hopfield Network: A Nuts and Bolts Approach November 4, 2013 Gershon Wolfe, Ph.D. Hopfield Model as Applied to Classification Hopfield network Training the network Updating nodes Sequencing of

More information

Introduction. Previous work has shown that AER can also be used to construct largescale networks with arbitrary, configurable synaptic connectivity.

Introduction. Previous work has shown that AER can also be used to construct largescale networks with arbitrary, configurable synaptic connectivity. Introduction The goal of neuromorphic engineering is to design and implement microelectronic systems that emulate the structure and function of the brain. Address-event representation (AER) is a communication

More information

Synchronization-based parameter estimation from time series

Synchronization-based parameter estimation from time series PHYSICAL REVIEW E VOLUME 54, NUMBER 6 DECEMBER 1996 Synchronization-based parameter estimation from time series U. Parlitz and L. Junge Drittes Physikalisches Institut, Universität Göttingen, Bürgerstra

More information

2009 Spring CS211 Digital Systems & Lab CHAPTER 2: INTRODUCTION TO LOGIC CIRCUITS

2009 Spring CS211 Digital Systems & Lab CHAPTER 2: INTRODUCTION TO LOGIC CIRCUITS CHAPTER 2: INTRODUCTION TO LOGIC CIRCUITS What will we learn? 2 Logic functions and circuits Boolean Algebra Logic gates and Synthesis CAD tools and VHDL Read Section 2.9 and 2.0 Terminology 3 Digital

More information

Neural Nets and Symbolic Reasoning Hopfield Networks

Neural Nets and Symbolic Reasoning Hopfield Networks Neural Nets and Symbolic Reasoning Hopfield Networks Outline The idea of pattern completion The fast dynamics of Hopfield networks Learning with Hopfield networks Emerging properties of Hopfield networks

More information

Secure Communication Using H Chaotic Synchronization and International Data Encryption Algorithm

Secure Communication Using H Chaotic Synchronization and International Data Encryption Algorithm Secure Communication Using H Chaotic Synchronization and International Data Encryption Algorithm Gwo-Ruey Yu Department of Electrical Engineering I-Shou University aohsiung County 840, Taiwan gwoyu@isu.edu.tw

More information

Controlling a Novel Chaotic Attractor using Linear Feedback

Controlling a Novel Chaotic Attractor using Linear Feedback ISSN 746-7659, England, UK Journal of Information and Computing Science Vol 5, No,, pp 7-4 Controlling a Novel Chaotic Attractor using Linear Feedback Lin Pan,, Daoyun Xu 3, and Wuneng Zhou College of

More information

( ) T. Reading. Lecture 22. Definition of Covariance. Imprinting Multiple Patterns. Characteristics of Hopfield Memory

( ) T. Reading. Lecture 22. Definition of Covariance. Imprinting Multiple Patterns. Characteristics of Hopfield Memory Part 3: Autonomous Agents /8/07 Reading Lecture 22 Flake, ch. 20 ( Genetics and Evolution ) /8/07 /8/07 2 Imprinting Multiple Patterns Let x, x 2,, x p be patterns to be imprinted Define the sum-of-outer-products

More information

Neural Networks Lecture 6: Associative Memory II

Neural Networks Lecture 6: Associative Memory II Neural Networks Lecture 6: Associative Memory II H.A Talebi Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Winter 2011. A. Talebi, Farzaneh Abdollahi Neural

More information

SELF-ORGANIZATION IN NONRECURRENT COMPLEX SYSTEMS

SELF-ORGANIZATION IN NONRECURRENT COMPLEX SYSTEMS Letters International Journal of Bifurcation and Chaos, Vol. 10, No. 5 (2000) 1115 1125 c World Scientific Publishing Company SELF-ORGANIZATION IN NONRECURRENT COMPLEX SYSTEMS PAOLO ARENA, RICCARDO CAPONETTO,

More information

ADAPTIVE CONTROLLER DESIGN FOR THE ANTI-SYNCHRONIZATION OF HYPERCHAOTIC YANG AND HYPERCHAOTIC PANG SYSTEMS

ADAPTIVE CONTROLLER DESIGN FOR THE ANTI-SYNCHRONIZATION OF HYPERCHAOTIC YANG AND HYPERCHAOTIC PANG SYSTEMS ADAPTIVE CONTROLLER DESIGN FOR THE ANTI-SYNCHRONIZATION OF HYPERCHAOTIC YANG AND HYPERCHAOTIC PANG SYSTEMS Sundarapandian Vaidyanathan 1 1 Research and Development Centre, Vel Tech Dr. RR & Dr. SR Technical

More information

Neural Networks for Machine Learning. Lecture 11a Hopfield Nets

Neural Networks for Machine Learning. Lecture 11a Hopfield Nets Neural Networks for Machine Learning Lecture 11a Hopfield Nets Geoffrey Hinton Nitish Srivastava, Kevin Swersky Tijmen Tieleman Abdel-rahman Mohamed Hopfield Nets A Hopfield net is composed of binary threshold

More information

Maps and differential equations

Maps and differential equations Maps and differential equations Marc R. Roussel November 8, 2005 Maps are algebraic rules for computing the next state of dynamical systems in discrete time. Differential equations and maps have a number

More information

3. Coding theory 3.1. Basic concepts

3. Coding theory 3.1. Basic concepts 3. CODING THEORY 1 3. Coding theory 3.1. Basic concepts In this chapter we will discuss briefly some aspects of error correcting codes. The main problem is that if information is sent via a noisy channel,

More information

Function Projective Synchronization of Discrete-Time Chaotic and Hyperchaotic Systems Using Backstepping Method

Function Projective Synchronization of Discrete-Time Chaotic and Hyperchaotic Systems Using Backstepping Method Commun. Theor. Phys. (Beijing, China) 50 (2008) pp. 111 116 c Chinese Physical Society Vol. 50, No. 1, July 15, 2008 Function Projective Synchronization of Discrete-Time Chaotic and Hyperchaotic Systems

More information

Modeling and Stability Analysis of a Communication Network System

Modeling and Stability Analysis of a Communication Network System Modeling and Stability Analysis of a Communication Network System Zvi Retchkiman Königsberg Instituto Politecnico Nacional e-mail: mzvi@cic.ipn.mx Abstract In this work, the modeling and stability problem

More information

LYAPUNOV EXPONENTS AND STABILITY FOR THE STOCHASTIC DUFFING-VAN DER POL OSCILLATOR

LYAPUNOV EXPONENTS AND STABILITY FOR THE STOCHASTIC DUFFING-VAN DER POL OSCILLATOR LYAPUNOV EXPONENTS AND STABILITY FOR THE STOCHASTIC DUFFING-VAN DER POL OSCILLATOR Peter H. Baxendale Department of Mathematics University of Southern California Los Angeles, CA 90089-3 USA baxendal@math.usc.edu

More information

SYNCHRONIZATION IN SMALL-WORLD DYNAMICAL NETWORKS

SYNCHRONIZATION IN SMALL-WORLD DYNAMICAL NETWORKS International Journal of Bifurcation and Chaos, Vol. 12, No. 1 (2002) 187 192 c World Scientific Publishing Company SYNCHRONIZATION IN SMALL-WORLD DYNAMICAL NETWORKS XIAO FAN WANG Department of Automation,

More information

Coordinate systems and vectors in three spatial dimensions

Coordinate systems and vectors in three spatial dimensions PHYS2796 Introduction to Modern Physics (Spring 2015) Notes on Mathematics Prerequisites Jim Napolitano, Department of Physics, Temple University January 7, 2015 This is a brief summary of material on

More information

MAS331: Metric Spaces Problems on Chapter 1

MAS331: Metric Spaces Problems on Chapter 1 MAS331: Metric Spaces Problems on Chapter 1 1. In R 3, find d 1 ((3, 1, 4), (2, 7, 1)), d 2 ((3, 1, 4), (2, 7, 1)) and d ((3, 1, 4), (2, 7, 1)). 2. In R 4, show that d 1 ((4, 4, 4, 6), (0, 0, 0, 0)) =

More information

AP Calculus Testbank (Chapter 9) (Mr. Surowski)

AP Calculus Testbank (Chapter 9) (Mr. Surowski) AP Calculus Testbank (Chapter 9) (Mr. Surowski) Part I. Multiple-Choice Questions n 1 1. The series will converge, provided that n 1+p + n + 1 (A) p > 1 (B) p > 2 (C) p >.5 (D) p 0 2. The series

More information

PHY411 Lecture notes Part 5

PHY411 Lecture notes Part 5 PHY411 Lecture notes Part 5 Alice Quillen January 27, 2016 Contents 0.1 Introduction.................................... 1 1 Symbolic Dynamics 2 1.1 The Shift map.................................. 3 1.2

More information

1 Review of di erential calculus

1 Review of di erential calculus Review of di erential calculus This chapter presents the main elements of di erential calculus needed in probability theory. Often, students taking a course on probability theory have problems with concepts

More information

Low Emittance Machines

Low Emittance Machines Advanced Accelerator Physics Course RHUL, Egham, UK September 2017 Low Emittance Machines Part 1: Beam Dynamics with Synchrotron Radiation Andy Wolski The Cockcroft Institute, and the University of Liverpool,

More information

A Highly Chaotic Attractor for a Dual-Channel Single-Attractor, Private Communication System

A Highly Chaotic Attractor for a Dual-Channel Single-Attractor, Private Communication System A Highly Chaotic Attractor for a Dual-Channel Single-Attractor, Private Communication System Banlue Srisuchinwong and Buncha Munmuangsaen Sirindhorn International Institute of Technology, Thammasat University

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

Time-delay feedback control in a delayed dynamical chaos system and its applications

Time-delay feedback control in a delayed dynamical chaos system and its applications Time-delay feedback control in a delayed dynamical chaos system and its applications Ye Zhi-Yong( ), Yang Guang( ), and Deng Cun-Bing( ) School of Mathematics and Physics, Chongqing University of Technology,

More information

Synchronizing Chaotic Systems Based on Tridiagonal Structure

Synchronizing Chaotic Systems Based on Tridiagonal Structure Proceedings of the 7th World Congress The International Federation of Automatic Control Seoul, Korea, July 6-, 008 Synchronizing Chaotic Systems Based on Tridiagonal Structure Bin Liu, Min Jiang Zengke

More information

A Novel Chaotic Neural Network Architecture

A Novel Chaotic Neural Network Architecture ESANN' proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), - April, D-Facto public., ISBN ---, pp. - A Novel Neural Network Architecture Nigel Crook and Tjeerd olde Scheper

More information

Experimental Characterization of Nonlinear Dynamics from Chua s Circuit

Experimental Characterization of Nonlinear Dynamics from Chua s Circuit Experimental Characterization of Nonlinear Dynamics from Chua s Circuit Patrick Chang, Edward Coyle, John Parker, Majid Sodagar NLD class final presentation 12/04/2012 Outline Introduction Experiment setup

More information

Christian Mohr

Christian Mohr Christian Mohr 20.12.2011 Recurrent Networks Networks in which units may have connections to units in the same or preceding layers Also connections to the unit itself possible Already covered: Hopfield

More information

Antimonotonicity in Chua s Canonical Circuit with a Smooth Nonlinearity

Antimonotonicity in Chua s Canonical Circuit with a Smooth Nonlinearity Antimonotonicity in Chua s Canonical Circuit with a Smooth Nonlinearity IOANNIS Μ. KYPRIANIDIS & MARIA Ε. FOTIADOU Physics Department Aristotle University of Thessaloniki Thessaloniki, 54124 GREECE Abstract:

More information

RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES

RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES P. M. Bleher (1) and P. Major (2) (1) Keldysh Institute of Applied Mathematics of the Soviet Academy of Sciences Moscow (2)

More information

On the FPGA-based Implementation of the Synchronization of Chaotic Oscillators in Master-Slave Topology?

On the FPGA-based Implementation of the Synchronization of Chaotic Oscillators in Master-Slave Topology? Memorias del Congreso Nacional de Control Automático ISSN: 594-49 On the FPGA-based Implementation of the Synchronization of Chaotic Oscillators in Master-Slave Topology? Omar Guillen-Fernandez Alejandro

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Implementing Memristor Based Chaotic Circuits

Implementing Memristor Based Chaotic Circuits Implementing Memristor Based Chaotic Circuits Bharathwaj Muthuswamy Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2009-156 http://www.eecs.berkeley.edu/pubs/techrpts/2009/eecs-2009-156.html

More information