Effects of Interactive Function Forms in a Self-Organized Critical Model Based on Neural Networks


Commun. Theor. Phys. (Beijing, China) 40 (2003) pp. 607-613, Vol. 40, No. 5, November 15, 2003

Effects of Interactive Function Forms in a Self-Organized Critical Model Based on Neural Networks

ZHAO Xiao-Wei, ZHOU Li-Ming, and CHEN Tian-Lun
Department of Physics, Nankai University, Tianjin 300071, China
(Received February 10, 2003)

Abstract: Based on the standard self-organizing map neural network model and an integrate-and-fire mechanism, we introduce a kind of coupled map lattice system to investigate scale-invariance behavior in the activity of model neural populations. We let the parameter β, which together with α represents the interactive strength between neurons, take different function forms, and we find that the function forms and their parameters are very important to our model's avalanche dynamical behaviors, especially to the emergence of different avalanche behaviors in different areas of our system.

PACS numbers: 64.60.Ht, 87.10.+e
Key words: self-organized criticality, avalanche, neural networks

(The project was supported by the National Natural Science Foundation of China under Grant Nos. 60074020 and 90203008, and by the Doctoral Foundation of the Ministry of Education of China. Email: xiaoweizhao@eyou.com)

1 Introduction

In recent years, an increasing number of systems that exhibit self-organized criticality (SOC) have been widely investigated. [1-4] All of these large dynamical systems tend to self-organize into a statistically stationary state without intrinsic spatial and temporal scales. This scale-invariant critical state is characterized by a power-law distribution of avalanche sizes. In the central nervous system there is some evidence of scale-invariant behavior, [5] so some scientists conjecture that the brain might operate at, or near, a critical state. [6] Recently, some work (including our own) has been done to investigate the mechanism of the SOC process in the brain, [7-12] and SOC behaviors have been found in these models under certain conditions. However, the brain, which possesses about 10^10 to 10^12 neurons, is one of the most complex systems and is highly structured both in form and in function. We believe that the neurobiological features of the real brain should affect the emergence of scale-invariant behaviors in the brain. So in our previous models we considered the effects of the learning process, the differences between individuals, [10,11] and the different specific areas. [12] This time we investigate the effect of different interactive function forms. In our new model, we let the parameter β, which together with another parameter α simulates the interactive strength between neurons, be a threshold function or a sigmoid function, and we find that the function form is important to our model's avalanche dynamical behaviors.

2 Model

Our model is a kind of coupled map lattice system based on the standard self-organizing map (SOM) model. [13] A detailed description of the SOM model's structure and learning mechanism can be found in our previous work. [11] According to the neuron-dynamical picture of the brain, the essential feature of the associative memory process can be described as a kind of integrate-and-fire process. [7] So, to capture the associative memory mechanism in our simulation, we add an integrate-and-fire mechanism to the model, just as in our previous work. [11,12] First, we consider only the computing layer (the square lattice) of the SOM model, representing a sheet of cells in the cortex. To each neuron located at position (i, j) in the lattice we assign a dynamical variable V_ij, which represents the membrane potential.
Now we present the computer simulation procedure of this model in detail; here we use open boundary conditions.

(i) Variable initialization. In a two-dimensional input space we create many input vectors, whose elements are uniformly or unevenly distributed (depending on the aim of the simulation) in the region [(0, 1); (0, 1)]. The afferent weight vectors are randomly initialized within [(0, 1); (0, 1)], and the dynamical variables V_ij are randomly distributed between [0, 1].

(ii) Learning process. During each learning step, a single vector, chosen randomly from the input vectors, is presented to the network; the winner neuron is found, and the afferent weight vectors are updated. After some steps, the state of the network reaches a stable and topology-preserving case, and the topological structure of the input space has been learned and stored in the model.

(iii) Associative memory and avalanche process. Here we use the sequential update mechanism.

(a) Driving rule. Find the maximal value V_max and add (V_th − V_max) to all neurons,

    V_ij → V_ij + (V_th − V_max).   (1)

The neuron with the maximal value is then unstable and will fire, and an avalanche (associative memory) begins. This driving process represents the continual driving that all neurons receive from the external environment or from other parts of the brain.

(b) When a neuron's dynamical variable V_ij exceeds the threshold V_th = 1, the neuron (i, j) is unstable: it fires and returns to zero. Each of its four nearest neighbors (i', j') receives a pulse and its membrane potential is changed,

    V_i'j' → V_i'j' + α·β·V_ij,   V_ij → 0,   (2)

where the term α·β·V_ij represents the action potential (interactive strength) between the firing neuron and its neighbors. Here the parameters α and β have meanings similar to the same parameters in our previous work. [11,12] Just as in our previous models, we let β be a function of the distance between the firing neuron (i, j) and the neighbor neuron (i', j') in the input weight space, β = σ(||ω_ij − ω_i'j'||), and we use it to represent a general Hebbian rule: if the responding states of two neighboring neurons for a specific input pattern are similar, the synaptic connection between them is strong; otherwise it is weak. Since the aim of this paper is to study the effect of the function form, we use two different function forms (threshold and sigmoid functions) here; the details are given in the next section.

(c) Repeat step (b) until all the neurons of the lattice are stable. Define this process as one avalanche, and define the avalanche size (associative memory size) as the number of all unstable neurons in this process.

(d) Begin step (a) again; another new avalanche (associative memory) begins.

3 Simulation Results

In our previous paper [12] we discussed the influence of the different specific areas formed by our model's topological learning and found that there are different avalanche distributions in the different specific areas of our system. This is an interesting behavior, so in this paper we study what happens to it when β takes different function forms. To simplify the simulation we let the parameter α be a constant, 0.25, and consider only the case with the learning process (just as in Ref. [12]).

In our model, the parameter β simulates the influence of synaptic plasticity. As mentioned above, it should represent a general Hebbian rule. In the following we use two function forms: the first is the threshold function, the simplest form we can imagine to represent the general Hebbian rule; the second is the sigmoid function. As we will see below, the different function forms and their parameters strongly affect our system's avalanche dynamical behaviors.

3.1 Threshold Function

First, we let σ(x) be a very simple function form, the threshold function,

    σ(x) = 1 for x ≤ θ,   σ(x) = 0 for x > θ.   (3)

That is, if the responding states of two neighboring neurons are very similar (the difference between them is smaller than a threshold θ), the synaptic connection (interaction) channel between them is completely open; otherwise it is completely closed. This is the simplest function form we can imagine to represent the general Hebbian rule, and this type of function is commonly used in the construction of artificial neural networks to represent the all-or-none property of neuron activation.
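For concreteness, the following is a minimal Python sketch of steps (i)-(iii) with the threshold form of β from Eq. (3). It is not the authors' code: the learning rate, neighborhood radius, and helper names (train_som, one_avalanche, sigma_threshold) are illustrative assumptions, and the compact Gaussian-neighborhood Kohonen update merely stands in for the SOM learning described in step (ii) and Ref. [11].

```python
import numpy as np

L = 50          # lattice size (illustrative)
ALPHA = 0.25    # interaction strength alpha, kept constant as in the paper
V_TH = 1.0      # firing threshold

def sigma_threshold(x, theta):
    """Threshold form of beta, Eq. (3): channel fully open for x <= theta, closed otherwise."""
    return 1.0 if x <= theta else 0.0

def train_som(inputs, n_steps, eta=0.1, radius=2.0, seed=None):
    """Very compact Kohonen/SOM learning of the afferent weights (step (ii), assumed form).

    inputs : (N, 2) array of input vectors in [0,1] x [0,1].
    Returns the trained weights W with shape (L, L, 2).
    """
    rng = np.random.default_rng(seed)
    W = rng.random((L, L, 2))                       # step (i): random afferent weights
    grid = np.stack(np.meshgrid(np.arange(L), np.arange(L), indexing="ij"), axis=-1)
    for _ in range(n_steps):
        x = inputs[rng.integers(len(inputs))]       # one randomly chosen input vector
        winner = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (L, L))
        h = np.exp(-((grid - np.array(winner)) ** 2).sum(-1) / (2 * radius ** 2))
        W += eta * h[..., None] * (x - W)           # pull weights toward the input
    return W

def one_avalanche(V, W, theta):
    """One driving step (a) followed by the resulting avalanche, steps (b)-(c).

    V : (L, L) membrane potentials, modified in place.
    W : (L, L, 2) trained afferent weights.
    Returns the avalanche size (number of distinct unstable neurons).
    """
    i0, j0 = np.unravel_index(np.argmax(V), V.shape)
    V += V_TH - V[i0, j0]                           # driving rule, Eq. (1)
    V[i0, j0] = V_TH                                # guard against floating-point round-off
    fired = set()
    stack = [(i0, j0)]
    while stack:                                    # sequential toppling, open boundaries
        i, j = stack.pop()
        if V[i, j] < V_TH:
            continue
        v_fire = V[i, j]
        V[i, j] = 0.0
        fired.add((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:
                beta = sigma_threshold(np.linalg.norm(W[i, j] - W[ni, nj]), theta)
                V[ni, nj] += ALPHA * beta * v_fire  # pulse to a neighbor, Eq. (2)
                if V[ni, nj] >= V_TH:
                    stack.append((ni, nj))
    return len(fired)
```

A full run in the spirit of the paper would train W on an unevenly distributed input sample, initialize V uniformly at random in [0, 1], and then call one_avalanche repeatedly, recording the size and origin of each avalanche separately for the two regions.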
Fig. 1 Parameter β = σ(x) uses the threshold function, with θ = 2/L and L = 50. (a) The probability P(S) of the avalanche size S (number of unstable neurons). The curves correspond to avalanches originating from the neurons corresponding to the low-density and high-density areas of the unevenly distributed input space. (b) Mean size <S>_ij of avalanches originating from each neuron of the lattice. The neurons corresponding to the high-density input region are located in the upper-left and lower-right corners of the lattice.

The most important (and the only) parameter is the threshold, or central point, θ (see Fig. 4). We first let θ = 2/L with L = 50, where the factor 1/L is introduced to reduce

the negative effect of the system size on our integrate-and-fire mechanism. Just as in Ref. [12], we let the input (data) space be unevenly distributed in step (i). After the learning process, we find that the afferent weights of the neural net self-organize into a topological map of the input space and that some specific areas are formed (this state means that neighboring neurons of the computing layer respond to nearby input vectors, and more neurons respond to the high-density region of the input space). We also find avalanche dynamical behaviors similar to those of Ref. [12]: the sizes of avalanches originating from the neurons responding to the high-density input area obey a good power-law distribution P(S) ~ S^(−τ) with τ ≈ 1.03, while for those of the low-density area the size distribution is not a very good power law, few large avalanches occur, and the mean avalanche size is small. The results can be seen in Fig. 1.

However, as the parameter θ changes, something happens. To see its effect clearly, we let θ vary from 0.1/L to 7/L and let the lattice size L be 20, 35, and 50. The results can be seen in Table 1 and Fig. 2. With increasing θ, the size distribution of avalanches originating from neurons responding to the low-density area shows more and more obvious power-law behavior, and the mean size of these avalanches also increases. We can see in Figs. 2 and 3 that when θ = 5/L, the avalanches originating from the high-density area and from the low-density area have similar power-law behavior, exponent τ, and mean avalanche size. That means the neurons responding to the low-density area also reach an SOC state, just like those corresponding to the high-density area. We can also see in Table 1 and Fig. 2 that with decreasing θ the mean size of avalanches from the high-density area decreases, the probability of large avalanches decreases, and the exponent τ increases; finally, the neurons responding to the high-density area only have avalanches of size 1, just like those of the low-density area. That means these neurons gradually deviate from the SOC state toward localized behavior. So we can conclude that when θ is very large our whole system reaches an SOC state, and when θ is very small our whole system is in a completely localized state; only in a certain range of θ do the different specific areas show different but appropriate avalanche dynamical behaviors.

Fig. 2 Parameter β = σ(x) uses the threshold function, with L = 20, 35, and 50. (a) Mean size of avalanches originating from neurons responding to the high-density area (<S>_h) and the low-density area (<S>_l) for different θ. (b) The ratio <S>_h / <S>_l as a function of θ, which indicates the difference between <S>_h and <S>_l.

Fig. 3 The same as Fig. 1, but with parameter θ = 5/L.
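The exponents τ reported below can be estimated from simulated avalanche sizes by a straight-line fit to the log-log histogram of P(S). A minimal sketch, assuming a list `sizes` collected by repeatedly calling a routine like `one_avalanche` above (the binning and fit range are illustrative choices, not the authors'):

```python
import numpy as np

def estimate_tau(sizes, s_min=1, s_max=None):
    """Least-squares estimate of tau in P(S) ~ S^(-tau) from a sample of avalanche sizes."""
    sizes = np.asarray(sizes)
    if s_max is None:
        s_max = sizes.max()
    bins = np.arange(s_min, s_max + 2)            # histogram over integer sizes
    counts, _ = np.histogram(sizes, bins=bins)
    p = counts / counts.sum()                     # normalize to a probability distribution
    s = bins[:-1]
    mask = p > 0
    # straight-line fit in log-log coordinates: log P = -tau * log S + const
    slope, _ = np.polyfit(np.log(s[mask]), np.log(p[mask]), 1)
    return -slope

# usage: tau_h = estimate_tau(sizes_from_high_density_area)
```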

Table 1 Critical exponents τ of the model using the threshold function with different thresholds θ (in units of 1/L) and L = 50. For avalanches originating from neurons corresponding to the high-density input area the exponent is τ_h; for those corresponding to the low-density area it is τ_l. The symbol "–" indicates that under the corresponding conditions the avalanche behavior is not a very good power law, or is even just localized behavior, and the critical exponent is only an approximation to aid understanding.

θ (1/L):  0.1   0.5   0.8   1.0   1.5   2.0   2.5   3.5   5.0   8.0
τ_h:       –    2.97  1.39  1.15  1.05  1.03  1.03  1.04  1.06  1.06
τ_l:       –    7.07  3.73  3.14  1.88  1.89  1.50  1.13  1.07  1.07

Why does this kind of phenomenon happen? After learning, the afferent weights of our neural network self-organize into a topological map of the input space. So in the weight space, corresponding to the high (low)-density region of the input space, the neurons also have a high (low)-density distribution. In the low-density area the mean weight distance between neighboring neurons ||ω_ij − ω_i'j'|| is large, while in the high-density area it is small. When θ takes an appropriate value (e.g., θ_a = 2.5/L), it lies just between these two mean distances. So for the neurons responding to the high-density area, the parameter β = σ(||ω_ij − ω_i'j'||) is 1, which means the synaptic connection channel between neighboring neurons is completely open; and for the neurons responding to the low-density area, β is 0, which means the channel is completely closed. So there are different mean interactive strengths between neurons responding to different input regions. As discussed in our previous work, as the synaptic connection strength increases, large avalanches become more probable and the SOC behavior becomes more obvious. So there are different avalanche dynamical behaviors in different regions. We imagine that, in this way, the brain can appropriately distinguish the different patterns stored in its different specific regions as different SOC attractors.

But if the value of θ is not appropriate, the phenomenon changes. When θ is so large (e.g., θ_L = 7/L) that it exceeds the mean weight distances between neighboring neurons responding to the high- and even the low-density area, the parameter β of both areas is 1. The mean interactive strengths of the two areas are then both strong, there are similar SOC behaviors in them, and the brain would find it hard to distinguish its different specific regions. And when θ is too small (e.g., θ_s = 0.1/L), even smaller than the mean weight distance between neighboring neurons responding to the high-density area, the parameter β of both the high-density and the low-density areas is 0. The mean interactive strengths of the two areas are then very weak, and there is no SOC but only the same localized behavior in these regions. Figure 4 helps to make these phenomena clear.

Fig. 4 Sketch of threshold functions with different thresholds. The weight distances x of the high-density area and the low-density area are distributed around area A and area B, respectively. When the threshold is very small (θ_s), σ(x) is 0 whether x lies in area A or area B; when the threshold is large (θ_L), σ(x) is 1 whether x lies in area A or area B; only when the threshold is appropriate (θ_a) is σ(x) 1 in area A and 0 in area B.
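The mechanism sketched in Fig. 4 can be checked numerically on a trained weight map by comparing the mean nearest-neighbor weight distance inside each region with θ. A minimal sketch, assuming the trained weights W and boolean region masks come from the simulation set-up sketched earlier (none of these names are from the paper):

```python
import numpy as np

def mean_neighbor_distance(W, mask):
    """Mean ||w_ij - w_i'j'|| over lattice edges whose endpoints both lie in `mask`.

    W    : (L, L, 2) trained afferent weights.
    mask : (L, L) boolean array selecting one region (e.g. the high-density area).
    """
    L = W.shape[0]
    dists = []
    for i in range(L):
        for j in range(L):
            for di, dj in ((1, 0), (0, 1)):        # each edge counted once
                ni, nj = i + di, j + dj
                if ni < L and nj < L and mask[i, j] and mask[ni, nj]:
                    dists.append(np.linalg.norm(W[i, j] - W[ni, nj]))
    return float(np.mean(dists)) if dists else float("nan")

# usage sketch: in the sense of the paper, an "appropriate" theta satisfies
#   mean_neighbor_distance(W, high_density_mask) < theta < mean_neighbor_distance(W, low_density_mask)
```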
3.2 Sigmoid Function

We also let σ(x) take another function form, the sigmoid function. It is a monotonic function with an S-shaped graph. Because it exhibits a graceful balance between linear and nonlinear behavior, it is also very commonly used in the construction of artificial neural networks. The form used here is

    σ(x) = 1 / (1 + exp(−γ(θ − x))),   (4)

where θ is the central point and γ is the slope parameter of the sigmoid function. By varying the parameter γ we obtain sigmoid functions of different slopes (see Fig. 8), and when γ is very large the sigmoid function simply becomes a threshold function with the same central point θ.
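A small sketch of Eq. (4) illustrates how γ controls the contrast between the two regions; the two distances below are made-up stand-ins for the mean neighbor weight distances of the high- and low-density areas (x_A < θ < x_B), not values from the paper:

```python
import numpy as np

def sigma_sigmoid(x, theta, gamma):
    """Sigmoid form of beta, Eq. (4): smooth interpolation between open and closed."""
    return 1.0 / (1.0 + np.exp(-gamma * (theta - x)))

L = 35
theta = 2.0 / L
x_A, x_B = 1.0 / L, 4.0 / L          # illustrative distances in areas A and B
for gamma in (10 * L, 0.5 * L):
    print(gamma, sigma_sigmoid(x_A, theta, gamma), sigma_sigmoid(x_B, theta, gamma))
# for gamma = 10*L the two values are close to 1 and 0 (threshold-like);
# for gamma = 0.5*L they move much closer together, so the two areas feel similar coupling
```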

We have discussed the influence of the central point (threshold) θ, so in this subsection we fix θ at an appropriate constant, 2/L, and consider only the effect of the slope parameter γ.

First, let the lattice size L be 35 and γ = 10L. We find in Fig. 5 that the avalanche dynamical behaviors of our system are similar to the situation with the same θ value but using the threshold function. This is easy to understand, because the sigmoid function degenerates to the threshold function with the same θ value when γ is very large. But from Fig. 6 we see that when γ = 0.5L, the neurons corresponding to different input-space regions obey similar avalanche size distributions and have similar mean avalanche sizes. From Table 2 and Fig. 7 we see that, with decreasing γ, the mean size of avalanches from neurons corresponding to the high-density input space gradually decreases; eventually it approaches the mean size of avalanches from neurons responding to the low-density input space.

Fig. 5 Parameter β = σ(x) uses the sigmoid function, with γ = 10L and L = 35. (a) and (b) have the same meanings as in Fig. 1.

Table 2 Critical exponents τ of the model using the sigmoid function with different slope parameters γ (in units of L) and L = 35. For avalanches originating from neurons corresponding to the high-density input area the exponent is τ_h; for those responding to the low-density area it is τ_l. The symbol "–" has the same meaning as in Table 1.

γ (L):  0.25  0.5   2.0   3.5   5.0   10
τ_h:    2.18  2.05  1.10  0.97  1.00  1.02
τ_l:    1.84  2.05  2.22  1.85  1.80  1.74

Fig. 6 The same as Fig. 5, but with γ = 0.5L.

Why does this kind of phenomenon happen? We know that, with decreasing γ, the slope of the sigmoid function also decreases, so the mean value of β = σ(x) between neurons responding to the high-density area gradually decreases from 1, and the mean value of β between neurons responding to the low-density area gradually increases from 0. When γ is small enough (e.g., γ = 0.5L), the difference between the mean values of β in these two areas is small, so the avalanche dynamical behaviors of the two areas tend to be similar. Figure 8 helps to make this phenomenon clear.

Fig. 7 Parameter β = σ(x) uses the sigmoid function, with L = 35. (a) Mean size of avalanches originating from neurons corresponding to the high-density area (<S>_h) and the low-density area (<S>_l) for different γ. (b) The ratio <S>_h / <S>_l as a function of γ.

Fig. 8 Sketch of sigmoid functions with different slope parameters γ. The weight distances x of the high-density area and the low-density area are distributed around area A and area B, respectively. When the slope is large, β = σ(x) is 1 in area A and 0 in area B; when the slope is small, the difference between the values of β in area A and area B is small.

Fig. 9 The same as Fig. 5, but with γ = 1.

There is another phenomenon that we should notice. In Fig. 9 we can see that when the parameter γ is very small (γ = 1), the avalanche size distribution has a peak near an avalanche size of 800, and the mean avalanche sizes of some particular neurons are very large. This phenomenon is very similar to a state called full synchronization in Ref. [14], in which all avalanches are slowly absorbed into a single one that ends up dominating the whole lattice. Why does this phenomenon happen? We guess it is the effect of quenched disorder.

After learning, although our neural network reaches a topology-preserving state in weight space, the weight distances x between neurons are not uniform across the system. It is this inhomogeneity that makes the values of β fluctuate and adds some quenched disorder to our system. When γ is very small, the slope of the sigmoid function is also small, so σ(x) is close to 1/2 regardless of whether a neuron responds to the high-density or the low-density area, and with α = 0.25 the values of the term α·β are distributed around 0.125. Just as stated in Ref. [14], when α·β is small, relatively small quenched disorder will push the system to the full synchronization state. But when γ is large, the value of α·β in the high-density area is distributed around 0.25, and the quenched disorder is too small to push that area from the SOC state to the full synchronization state. We believe this is why the effect of the quenched disorder is obvious when γ is very small but absent when γ is large. To test our hypothesis, we added some quenched disorder similar to that used in Ref. [14] into our system and obtained the expected results.

4 Conclusion

In this paper we find that the interactive function form between neurons is very important to the avalanche dynamical behaviors, especially to the emergence of different avalanche behaviors in different areas of our system. Only with an appropriate function form and parameters (e.g., θ and γ) can different areas of our system display different avalanche behaviors. Just as we mentioned in our previous paper, [12] because of the differences existing among the specific areas of the brain, it is hard to imagine that the whole brain operates at a single universal SOC state. We guess that if an SOC mechanism exists in the real brain, it might operate in this way: responding to a special outer pattern (external input signal), only some specific areas of the brain reach the SOC state; different patterns are stored in different specific areas as different SOC attractors; and associative memory, whether triggered by an external input signal or by the brain itself, can be viewed as the process by which these areas of the brain evolve into the corresponding SOC attractor. But how does this mechanism distinguish different SOC attractors, and how does it prevent the brain from evolving into improper attractors? We guess that the key to this problem is the proper interactive function form: only with the proper form can the brain have the proper associative memory corresponding to a special external pattern and distinguish the large number of things around us. Of course, the proper function form and parameters are obtained by our tuning in this work; for the real brain, we believe, the proper function should be obtained by the brain's lengthy evolution (perhaps several hundred million years).

Scale invariance in neurobiology has been noticed by scientists only recently, and theoretical and experimental investigations in this field are only at their beginning. How the scale invariance comes about and how it affects our actions and the way we think are not known. Our work merely attempts to indicate some relations between scale-invariance behavior and brain dynamical processes. We hope it will be of some help in the understanding of this exciting field. Of course, it should be noted that our model is only a very simple simulation of the brain and many details of neurobiology are ignored.
For this reason, there is still a lot of work to do.

References

[1] P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. A 38 (1988) 364.
[2] Z. Olami, H.J.S. Feder, and K. Christensen, Phys. Rev. Lett. 68 (1992) 1244; K. Christensen and Z. Olami, Phys. Rev. A 46 (1992) 1829.
[3] P. Bak and K. Sneppen, Phys. Rev. Lett. 71 (1993) 4083.
[4] K. Christensen, H. Flyvbjerg, and Z. Olami, Phys. Rev. Lett. 71 (1993) 2737.
[5] T. Gisiger, Biol. Rev. 76 (2001) 161.
[6] P. Bak, How Nature Works: The Science of Self-Organized Criticality, Springer-Verlag, New York (1996).
[7] CHEN Dan-Mei, et al., J. Phys. A: Math. Gen. 28 (1995) 5177.
[8] Da Silva, et al., Phys. Lett. A 242 (1998) 343.
[9] HUANG Jing and CHEN Tian-Lun, Commun. Theor. Phys. (Beijing, China) 33 (2000) 365.
[10] ZHAO Xiao-Wei and CHEN Tian-Lun, Commun. Theor. Phys. (Beijing, China) 36 (2001) 351.
[11] ZHAO Xiao-Wei and CHEN Tian-Lun, Phys. Rev. E 65 (2002) 026114.
[12] ZHAO Xiao-Wei and CHEN Tian-Lun, Commun. Theor. Phys. (Beijing, China) 40 (2003) 363.
[13] T. Kohonen, Proc. IEEE 78 (1990) 1464.
[14] N. Mousseau, Phys. Rev. Lett. 77 (1996) 968.