Commun. Theor. Phys. (Beijing, China) 42 (2004) pp. 121-125, Vol. 42, No. 1, July 15, 2004. © International Academic Publishers

Effects of Interactive Function Forms and Refractory Period in a Self-Organized Critical Model Based on Neural Networks

ZHOU Li-Ming and CHEN Tian-Lun
Department of Physics, Nankai University, Tianjin 300071, China
(Received September 25, 2003)

Abstract: Based on the standard self-organizing map neural network model and an integrate-and-fire mechanism, we investigate the effect of the nonlinear interactive function on the self-organized criticality in our model. On this basis we also investigate the effect of the refractory period on the self-organized criticality of the system.

PACS numbers: 64.60.Ht, 87.10.+e
Key words: self-organized criticality, avalanche, neuron networks, refractory period

1 Introduction

A few years ago, Bak et al. introduced the concept of self-organized criticality (SOC) with the sand-pile model. [1] Since then, this concept has been widely studied in many extended dissipative systems. [2-4] It has been shown that all these large dynamical systems tend to self-organize into a statistically stationary state without intrinsic spatial or temporal scales. This scale-invariant critical state is characterized by a power-law distribution of avalanche sizes.

Some evidence now suggests that the brain works at the SOC state. [5] The function of the brain must develop continuously, and this development is not coded in the DNA, so the brain cannot be designed but must self-organize. If the brain were in a subcritical state, a neuron's firing would produce only a local behavior. If the brain were in a chaotic state, a single neuron's firing would trigger an avalanche of very wide range, too wide for the information transported between neurons to affect each other usefully. The brain must therefore be at the critical state, where information passes evenly. [6] The brain possesses about 10^10 - 10^12 neurons.
It is easy to understand the firing mechanism of a single neuron, but the neurons in the brain are far too numerous for us to understand directly how they affect one another. We know that a neuron's input depends on the connective intensity between itself and the firing neuron, and that changing this connective intensity affects the neuron's output. [6] Based on our previous models, we introduce a nonlinear interactive function, with many possible forms, into the integrate-and-fire mechanism, and investigate its influence on the SOC in our model.

We also know that the neurons in the brain have a refractory period. That is to say, after a neuron sends out a pulse, it does not fire again for a period of time even under strong excitation. [7] In this paper we investigate the influence of the length of the refractory period on the SOC, and the influence of learning on the SOC once the refractory period is taken into account.

2 Model

Our model is a kind of coupled map lattice system based on the standard self-organizing map (SOM) model. [8] It has two layers. The first is the input layer, which has h neurons and receives an h-dimensional input vector ξ. The second is the computing layer, a two-dimensional square lattice of L × L neurons, each connected with the h input neurons through an h-dimensional afferent weight vector ω. The concrete mechanism of learning is the same as in our previous work. [9]

According to the neuron dynamical picture of the brain, the essential feature of the associative memory process can be described as a kind of integrate-and-fire process. [10] When the membrane potential of a neuron exceeds the threshold, the neuron sends out signals in the form of action potentials and then returns to the rest state (the neuron fires).
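As a concrete illustration, the winner-selection rule and a standard Kohonen weight update for this SOM layer can be sketched as follows. This is a minimal sketch only: the paper defers the learning details to Refs. [8] and [9], so the learning rate, the Gaussian neighborhood function, and all variable names here are our assumptions.

```python
import numpy as np

def som_learning_step(weights, xi, eta=0.1, sigma_n=1.0):
    """One standard Kohonen (SOM) learning step.

    weights: (L, L, h) afferent weight vectors of the computing layer.
    xi: h-dimensional input vector.
    eta, sigma_n: learning rate and neighborhood width (assumed values).
    Returns the winner position (i*, j*).
    """
    L = weights.shape[0]
    # Winner neuron: minimal distance ||xi - w_ij|| over the lattice.
    dists = np.linalg.norm(weights - xi, axis=2)
    i_w, j_w = np.unravel_index(np.argmin(dists), (L, L))
    # Move every weight toward xi, weighted by a Gaussian neighborhood
    # centred on the winner (the standard SOM neighborhood function).
    ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    lattice_d2 = (ii - i_w) ** 2 + (jj - j_w) ** 2
    hfun = np.exp(-lattice_d2 / (2 * sigma_n ** 2))
    weights += eta * hfun[:, :, None] * (xi - weights)
    return (i_w, j_w)
```

Repeating such steps over randomly drawn input vectors is what drives the weights toward a topology-preserving map of the input space, as described below.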
The signal is transferred to the other neurons by the synapses, which have an excitatory or inhibitory influence on the membrane potential of the receiving cells according to whether the synapses are excitatory or inhibitory. If the resulting membrane potential also exceeds the threshold, the next firing step follows, and thus an avalanche develops.

In this mechanism we only consider the computing layer, which represents a sheet of cells in the cortex. For any neuron at position (i, j) in the lattice, we assign a dynamical variable V_ij; the membrane potentials V_ij = 0 and V_ij > 0 represent the neuron

* The project was supported by the National Natural Science Foundation of China and the Doctoral Foundation of the Ministry of Education of China under Grant Nos. 60074020 and 90203008.
E-mail: zhouliming@eyou.com
in a rest state and a depolarized state, respectively. Here we do not consider the situation V_ij < 0, which would represent the neuron in a hyperpolarized state. [10]

The driving rule is as follows. During a learning step t, an h-dimensional vector ξ is input, and we find the winner neuron (i*, j*) in the computing layer according to

    ||ξ(t) - ω_{i*j*}(t)|| = min_{ij} ||ξ(t) - ω_{ij}(t)||,    (1)

where ||ξ(t) - ω_{ij}(t)|| is the distance between ξ(t) and ω_{ij}(t). The adjustment of the weight vectors is the same as in our previous work. [11]

When the winner neuron's dynamical variable V_{i*j*} exceeds the threshold V_th = 1, the neuron (i*, j*) is unstable: it fires and then returns to the rest state (V_{i*j*} returns to zero). Each of its four nearest neighbors receives a pulse, and its membrane potential V_{i'j'} is changed according to

    V_{i'j'} → V_{i'j'} + 0.25 σ V_{i*j*},    V_{i*j*} → 0,    (2)

where 0.25 σ V_{i*j*} represents the action potential between the firing neuron and its nearest neighbors; we assume it is proportional to V_{i*j*}. Here σ is a function, σ = f(||ω_{i*j*} - ω_{i'j'}||), where ||ω_{i*j*} - ω_{i'j'}|| is the distance between the firing neuron (i*, j*) and the neighbor neuron (i', j') in the input weight space. We use it to represent a general Hebbian rule: if the responding states of two nearest-neighbor neurons for a specific input pattern are similar, the synaptic connection between them is strong; otherwise it is weak. Of course, σ = 1 corresponds to the conservative case. We use open boundary conditions in the computer simulations.

The procedure is as follows:

(i) Variable initialization. Here we let h = 2. In the two-dimensional input space we create many input vectors whose elements are uniformly distributed in the region [(0, 1); (0, 1)]. The afferent weight vectors are initialized randomly in [(0, 1); (0, 1)], and the dynamical variables V_ij are distributed randomly in the region [0, 1].

(ii) Learning process.
During each learning step, a single vector chosen randomly from the input vectors is presented to the network, the winner neuron is found, and the afferent weight vectors are updated. After some steps, the state of the network reaches a stable, topology-preserving case: the topological structure of the input space has been learned and stored in the model.

(iii) Associative memory and avalanche process. Here we use the sequential update mechanism.

(a) Driving rule: find the maximal value V_max and add V_th - V_max to all neurons,

    V_ij → V_ij + (V_th - V_max).    (3)

The neuron with the maximal value then becomes unstable and fires, and an avalanche (associative memory) begins.

(b) While there exists any unstable neuron, i.e. a neuron whose dynamical variable V_{i*j*} exceeds the threshold V_th = 1, redistribute its dynamical variable and those of its nearest neighbors according to Eq. (2), until no neuron can fire. Then finish one step, and begin the next step.

(c) Repeat step (b) until all neurons of the lattice are stable. Define this process as one avalanche, and define the avalanche size (associative memory size) as the number of all unstable neurons in this process.

(d) Return to step (a); another new avalanche (associative memory) begins.

We point out that we let σ = 1 (the conservative case) for simplicity when discussing the influence of the refractory period on the SOC behaviors. We let the neuron's refractory period be n time steps. If there exists an unstable neuron with V_{i*j*} ≥ V_th = 1, and the neuron (i*, j*) is firing for the first time or n steps have passed since its last fire, then the neuron (i*, j*) fires, and the dynamical variables of the neuron and its nearest neighbors are redistributed according to Eq. (2). If fewer than n steps have passed since the last fire of neuron (i*, j*), the neuron does not fire, but it still receives the action from its nearest neighbors.
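The driving and avalanche steps above, including the n-step refractory rule, can be sketched as follows. This is a minimal Python sketch under stated assumptions: the conservative case σ = 1, a sequential sweep whose order is our choice, one sweep counted as one time step, and the reading that a neuron held back by the refractory period fires once its period expires; all function and variable names are ours.

```python
import numpy as np

def run_avalanche(V, last_fire, t, n=0, V_th=1.0):
    """One avalanche, following steps (a)-(c) with an n-step refractory
    period (n = 0 gives the model without a refractory period).
    last_fire[i, j] holds the time of neuron (i, j)'s last fire, or -1
    if it has never fired. Returns the avalanche size."""
    L = V.shape[0]
    V += V_th - V.max()              # (a) driving rule, Eq. (3)
    size = 0
    while True:
        fired = False
        for i in range(L):           # (b) sequential sweep over lattice
            for j in range(L):
                refractory = (last_fire[i, j] >= 0
                              and t - last_fire[i, j] < n)
                if V[i, j] >= V_th and not refractory:
                    pulse = 0.25 * V[i, j]        # Eq. (2) with sigma = 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < L and 0 <= nj < L:
                            V[ni, nj] += pulse    # open boundary: edge
                                                  # pulses are lost
                    V[i, j] = 0.0
                    last_fire[i, j] = t
                    size += 1
                    fired = True
        t += 1                       # one sweep = one time step
        # (c) the avalanche ends when no neuron remains above threshold;
        # refractory neurons above threshold fire after their period ends.
        if not fired and V.max() < V_th:
            break
    return size
```

Calling `run_avalanche` repeatedly and recording the returned sizes yields the avalanche-size statistics P(S) analyzed in the next section.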
3 Simulation Results

3.1 Influence of Different Nonlinear Interactive Functions on SOC Behaviors

In this paper we let σ(x) = tanh(αx) exp(-βx^2). Since α and β can be varied over a wide range, this form can produce many nonlinear functions, as shown in Fig. 1. When α = 1 and β = 1 the curve is similar to a trigonometric function (curve 1 in Fig. 1); when α = 10 and β = 0.001 the curve is similar to a piecewise linear function (curve 2); when α = 10 and β = 100 the curve is similar to the δ function (curve 3). In this paper we let α = 150 and β = 50 (curve 4). With this function the dynamical behavior of the system easily reaches chaos, [12] so it lets us simulate the working state of the brain more readily. In the function σ(x), α mainly decides the peak value of the function, and β mainly decides its width.

To observe the extent and size of the response of the neurons in the integrate-and-fire mechanism, we show the distribution of the differences of the weights ω between nearest neurons after learning. As shown in Fig. 2, we can see that the peak of the distribution of the differences
of the weights ω between the nearest neurons lies mainly between 0.01 and 0.028.

Fig. 1 The function σ has many forms: curve 1 (α = 1, β = 1) is similar to a trigonometric function; curve 2 (α = 10, β = 0.001) is similar to a piecewise linear function; curve 3 (α = 10, β = 100) is similar to the δ function; curve 4 (α = 150, β = 50) is the form we use in this paper.

Fig. 2 The distribution of the weights after learning. n is the number of occurrences of the same value of ||ω_{i*j*} - ω_{i'j'}||.

Fig. 3 The interactive function σ between the neurons. We can see clearly the correlation between σ and ||ω_{i*j*} - ω_{i'j'}|| for different values of the parameter α.

Fig. 4 The influence of α on the SOC, for α = 150, 100, 50, with β = 50 and L = 35.

i) The influence of α on the SOC

Because α mainly decides the peak value of the function, we can consider that α decides the size of the pulse that the firing neuron (i*, j*) sends out. As α increases, the peak value of the function increases, the pulse sent from the firing neuron to its nearest neighbors increases, and the neighbors reach the threshold more easily, so the probability of producing a big avalanche increases. From Fig. 3 we can see that when β = 50 and α varies from 150 to 50, the value of σ corresponding to the peak of the weight-difference distribution between nearest lattice sites is different: the value of σ at α = 150 is obviously bigger than that at α = 50. This is why at α = 150 we can see the power-law behavior P(S) ~ S^(-τ) with τ = 1.28, but when α decreases to 50, the avalanche-size probability P(S) decays exponentially with the size of the avalanches, which means there are only localized behaviors (see Fig. 4).
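The shapes of σ(x) = tanh(αx) exp(-βx^2) and the roles of α and β described above can be checked numerically with a short script (a sketch; the sampling grid is our choice):

```python
import math

def sigma(x, alpha, beta):
    """Interactive function sigma(x) = tanh(alpha*x) * exp(-beta*x**2)."""
    return math.tanh(alpha * x) * math.exp(-beta * x * x)

# The four parameter pairs shown in Fig. 1 (curve number: (alpha, beta)).
curves = {1: (1, 1), 2: (10, 0.001), 3: (10, 100), 4: (150, 50)}
for name, (a, b) in curves.items():
    # alpha mainly sets the peak height, beta mainly sets the width:
    # scan x in [0, 2] and report the maximum of sigma(x).
    peak = max(sigma(x / 1000.0, a, b) for x in range(2001))
    print(f"curve {name}: alpha={a}, beta={b}, peak height ~ {peak:.3f}")
```

On the weight-difference scale actually realized after learning (roughly 0.01 to 0.028, Fig. 2) the parameter choice matters directly: for example at x = 0.02, σ ≈ 0.98 for α = 150 but only ≈ 0.75 for α = 50, consistent with the trend read off Fig. 3.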
ii) The influence of β on the SOC

After the learning process, the neural network's weights self-organize into a topological map of the input space, so the differences of the weight vectors ω between neighboring neurons are limited to a range from 0 to 0.028 (see Fig. 2). Because β mainly decides the width of the function, as β increases the width of the function decreases. This causes the values of σ corresponding to β = 50, β = 100, and β = 150 to decrease in turn (see Fig. 5). The value that the firing neuron sends to its nearest neighbors decreases, so the possibility that the nearest neighbors fire decreases too. As shown in Fig. 6, when α = 150 and β = 50,
we can see the power-law behavior P(S) ~ S^(-τ) with τ = 1.28, but when β increases to 150 we can no longer see the power-law behavior.

Fig. 5 The interactive function σ between the neurons. We can see clearly the correlation between σ and ||ω_{i*j*} - ω_{i'j'}|| for different values of the parameter β.

Fig. 6 The influence of β on the SOC, for β = 50, 100, 150, with α = 150 and L = 35.

3.2 Influence of the Refractory Period on SOC

i) The influence of the size of the refractory period on SOC

We let the size of the refractory period be m = 1, 2, 3, 4, 6, 9 time steps and compare with the system without a refractory period (m = 0). In this process we use the collateral (parallel) fire mechanism. From Fig. 7 we can see that the system shows the power-law behavior P(S) ~ S^(-τ), and the exponent τ of the system without a refractory period is bigger than that of the system with one. As the size of the refractory period increases, the exponent τ of the power law decreases. But when m ≥ 3 the exponent τ is almost invariant: for periods m > 3 the influence of the refractory period is similar to that of m = 3. That is to say, once the size of the refractory period is bigger than some value, its influence on the SOC behaviors of the system is almost invariant. If we let one time step be 0.3 milliseconds, then a millisecond is about three time steps; this matches the synaptic delay of 0.3 ms to 1 ms in biology. [7]

Fig. 7 The influence of the refractory period on the SOC, for refractory periods m = 0, 1, 2, 3, 4, 6, 9 and L = 35.

ii) The influence of learning on SOC after considering the refractory period

Fig. 8 The influence of learning on the SOC after considering the refractory period.

Now we investigate the influence of learning on the SOC behavior. We let m = 1, and the neurons fire according to Eq. (2). We find that after considering the refractory period, the system still shows a good power-law behavior, as shown in Fig.
8. So learning remains important for SOC after the refractory period is considered. The conclusion is the same as the result of our previous work. [9] This is because before the learning process the afferent weights are randomly distributed (see Fig. 9) and thus do not respond to the topological structure of the input space. So ||ω_{i*j*} - ω_{i'j'}|| is large, which means the synaptic connections between neighboring neurons are weak. After the learning process, neighboring neurons respond to nearby
input vectors, and the afferent weights self-organize into a topological map of the input space, so ||ω_{i*j*} - ω_{i'j'}|| is small, which means the synaptic connections between neighboring neurons are strong.

Fig. 9 The distribution of the weights without learning. n is the number of occurrences of the same value of ||ω_{i*j*} - ω_{i'j'}||.

4 Conclusion

In this paper, we introduce a nonlinear function into the integrate-and-fire mechanism and investigate the influence of the parameters α and β of this function on the SOC behaviors of the model. We also investigate the influence of the refractory period, which biological neurons possess, on the SOC behavior of the model. We find that the influence of the size of the refractory period on the SOC is obvious, but when m > 3 the exponent τ of the power law is basically invariant.

Our work only tries to indicate some relations between the SOC behavior and the associative memory process of the brain. It might provide an approach for analyzing the collective behavior of neuron populations in the brain. Because there are many kinds of mechanisms in the brain, our model is only a very simple simulation of the brain and many details of neurobiology are ignored. There is still a lot of work to do.

References

[1] P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. A 38 (1988) 364.
[2] Z. Olami, S. Feder, and K. Christensen, Phys. Rev. Lett. 68 (1992) 1244; K. Christensen and Z. Olami, Phys. Rev. A 46 (1992) 1829.
[3] P. Bak and K. Sneppen, Phys. Rev. Lett. 71 (1993) 4083.
[4] K. Christensen, H. Flyvbjerg, and Z. Olami, Phys. Rev. Lett. 71 (1993) 2737.
[5] T. Gisiger, Biol. Rev. 76 (2001) 161.
[6] P. Bak, How Nature Works: The Science of Self-Organized Criticality, Springer-Verlag, New York (1996).
[7] SUN Jiu-Rong, The Basic Theory of the Brain, Peking University Press, Beijing (2001).
[8] T. Kohonen, Proceedings of the IEEE 78 (1990) 1464.
[9] ZHAO Xiao-Wei and CHEN Tian-Lun, Phys. Rev. E 65 (2002) 026114.
[10] CHEN Dan-Mei, et al., J. Phys. A: Math. Gen. 28 (1995) 5177.
[11] ZHAO Xiao-Wei and CHEN Tian-Lun, Commun. Theor. Phys. (Beijing, China) 40 (2003) 363.
[12] J.W. Shuai, Phys. Rev. E 56 (1997) 890.