Effects of Interactive Function Forms and Refractory Period in a Self-Organized Critical Model Based on Neural Networks
Commun. Theor. Phys. (Beijing, China) 42 (2004) pp. 121-125. © International Academic Publishers. Vol. 42, No. 1, July 15, 2004

Effects of Interactive Function Forms and Refractory Period in a Self-Organized Critical Model Based on Neural Networks

ZHOU Li-Ming and CHEN Tian-Lun
Department of Physics, Nankai University, Tianjin, China

(Received September 25, 2003)

Abstract: Based on the standard self-organizing map neural network model and an integrate-and-fire mechanism, we investigate the effect of the nonlinear interactive function on the self-organized criticality in our model. On this basis, we also investigate the effect of the refractory period on the self-organized criticality of the system.

Key words: self-organized criticality, avalanche, neuron networks, refractory period

1 Introduction

A few years ago, Bak et al. introduced the concept of self-organized criticality (SOC) in the sand-pile model. [1] Since then, this concept has been widely studied in many extended dissipative systems. [2-4] It has been shown that such large dynamical systems tend to self-organize into a statistically stationary state without intrinsic spatial and temporal scales. This scale-invariant critical state is characterized by a power-law distribution of avalanche sizes. There is now evidence that the brain works at the SOC state. [5] The function of the brain must develop at all times, and this development is not coded in the DNA, so the brain cannot be designed but must be self-organized. If the brain were in a subcritical state, a neuron's firing would produce only a local response. If it were in a chaotic state, a neuron's firing would trigger a very wide-ranging avalanche; because such a range is too wide for the information transported among the neurons to influence them usefully, the brain must sit at the critical state, where information passes evenly. [6] The brain possesses about 10^11 neurons.
It is easy to understand the firing mechanism of a single neuron, but the neurons in the brain are far too numerous for us to understand directly how they affect one another. We know that a neuron's input depends on the connective intensity between itself and the firing neuron, and that changing this connective intensity changes the neuron's output in the brain. [6] Based on our previous models, we introduce a nonlinear interactive function, which can take many forms, into the integrate-and-fire mechanism, and we investigate its influence on the SOC in our model. We also know that neurons in the brain have a refractory period; that is, for a period of time after a neuron sends out a pulse, it does not fire even when it receives strong excitation. [7] In this paper we investigate the influence of the length of the refractory period on the SOC, and the influence of learning on the SOC once the refractory period is taken into account.

2 Model

Our model is a kind of coupled map lattice system based on the standard self-organizing map (SOM) model. [8] It has two layers. The first is the input layer, which has h neurons and receives an h-dimensional input vector ξ. The second is the computing layer, a two-dimensional square lattice of L × L neurons, each connected with the h input neurons through an h-dimensional afferent weight vector ω. The concrete mechanism of learning is the same as in our previous work. [9] According to the neuron dynamical picture of the brain, the essential feature of the associative memory process can be described as a kind of integrate-and-fire process. [10] When the membrane potential of a neuron exceeds the threshold, the neuron sends out signals in the form of action potentials and then returns to the rest state (the neuron fires).
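The learning stage is deferred to Ref. [9], but since the model is built on the standard SOM, a textbook Kohonen update conveys the idea. The sketch below is illustrative only: the lattice size, learning rate, neighbourhood radius, and Gaussian neighbourhood function are common textbook choices, not parameters taken from the paper.

```python
# Hedged sketch of a standard Kohonen/SOM learning step (not the authors' code).
import math
import random

L, h = 8, 2  # illustrative lattice side and input dimension (the paper uses L = 35, h = 2)
random.seed(3)
# afferent weight vectors omega[i][j], initialized uniformly in [0, 1]^h
omega = [[[random.random() for _ in range(h)] for _ in range(L)] for _ in range(L)]

def som_step(xi, eta=0.1, radius=1.5):
    """One Kohonen step: find the winner (closest weight vector to xi),
    then pull the weights of lattice-neighbouring units toward xi."""
    iw, jw = min(((i, j) for i in range(L) for j in range(L)),
                 key=lambda p: math.dist(xi, omega[p[0]][p[1]]))
    for i in range(L):
        for j in range(L):
            d2 = (i - iw) ** 2 + (j - jw) ** 2
            g = math.exp(-d2 / (2 * radius ** 2))  # Gaussian neighbourhood function
            for k in range(h):
                omega[i][j][k] += eta * g * (xi[k] - omega[i][j][k])
    return iw, jw

# After many steps on uniform input data, the map becomes topology-preserving.
for _ in range(2000):
    som_step([random.random(), random.random()])
```

Because each update is a convex combination of the old weight and an input in [0, 1]^2, the weights stay inside the input region while spreading out over it.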
The signal is transferred to the other neurons by the synapses, which have an excitatory or inhibitory influence on the membrane potential of the receiving cells according to whether the synapses are excitatory or inhibitory. The resulting membrane potential, if it also exceeds the threshold, leads to the next firing step, and thus gives an avalanche. In this mechanism we consider only the computing layer, which represents a sheet of cells in the cortex. For any neuron at position (i, j) in the lattice, we give it a dynamical variable V_ij; the membrane potential V_ij = 0 and V_ij > 0 represent the neuron in a rest state and a depolarized state, respectively. Here we do not consider the situation V_ij < 0, which represents the neuron in the hyperpolarized state. [10]

[* The project supported by the National Natural Science Foundation of China and the Doctoral Foundation of the Ministry of Education of China under Grant Nos. E-mail: zhouliming@eyou.com]

The driving rule is as follows. During a learning step t, an h-dimensional vector ξ is input, and we find the winner neuron (i*, j*) in the computing layer according to

    ‖ξ(t) − ω_{i*j*}(t)‖ = min_{ij} ‖ξ(t) − ω_{ij}(t)‖,    (1)

where ‖ξ(t) − ω_{ij}(t)‖ is the distance between ξ(t) and ω_{ij}(t). The adjustment of the weight vectors is the same as in our previous work. [11] When the winner neuron's dynamical variable V_{i*j*} exceeds a threshold V_th = 1, the neuron (i*, j*) is unstable; it fires and then returns to the rest state (V_{i*j*} returns to zero). Each of its four nearest neighbors receives a pulse, and its membrane potential V_{i'j'} is changed:

    V_{i'j'} → V_{i'j'} + 0.25 σ V_{i*j*},    V_{i*j*} → 0,    (2)

where 0.25 σ V_{i*j*} represents the action potential between the firing neuron and its nearest neighbors; we assume it is proportional to V_{i*j*}. Here σ is a function, σ = f(‖ω_{i*j*} − ω_{i'j'}‖), where ‖ω_{i*j*} − ω_{i'j'}‖ is the distance between the firing neuron (i*, j*) and the neuron (i', j') in the input weight space. We use it to represent a general Hebbian rule: if the responding states of two nearest-neighbor neurons for a specific input pattern are similar, the synaptic connection between them is strong; otherwise it is weak. Of course, σ = 1 corresponds to the condition of conservation. We use the open boundary condition in the computer simulation.

The procedure is as follows: (i) Variable initialization. Here we let h = 2. In a two-dimensional input space, we create many input vectors whose elements are uniformly distributed in the region [(0, 1); (0, 1)]. We randomly initialize the afferent weight vectors in [(0, 1); (0, 1)], and let the dynamical variables V_ij be distributed randomly in the region [0, 1]. (ii) Learning process.
During each learning step, a single vector chosen randomly from the input vectors is presented to the network; the winner neuron is found and the afferent weight vectors are updated. After some steps the state of the network reaches a stable, topology-preserving case, and the topological structure of the input space has been learned and stored in the model. (iii) Associative memory and avalanche process. Here we use the sequential update mechanism.

(a) Driving rule: find the maximal value V_max and add V_th − V_max to all neurons,

    V_ij → V_ij + (V_th − V_max).    (3)

The neuron with the maximal value is then unstable and will fire; an avalanche (associative memory) begins.

(b) When there exists any unstable neuron, i.e. a neuron whose dynamical variable V_{i*j*} exceeds the threshold V_th = 1, redistribute its dynamical variable and those of its nearest neighbors according to Eq. (2), until no neuron can fire. This finishes one step; then begin the next step.

(c) Repeat step (b) until all neurons of the lattice are stable. Define this process as one avalanche, and define the avalanche size (associative memory size) as the number of all unstable neurons in the process.

(d) Begin step (a) again, and another avalanche (associative memory) begins.

We point out that, for simplicity in discussing the influence of the refractory period on the SOC behaviors, we let σ = 1 (the condition of conservation). We let the neuron's refractory period be n time steps. If there exists an unstable neuron with V_{i*j*} ≥ V_th = 1, and the neuron (i*, j*) is firing for the first time or n steps have passed since its last firing, then the neuron (i*, j*) fires, and the dynamical variables of the neuron and its nearest neighbors are redistributed according to Eq. (2). If n steps have not passed since the last firing, the neuron does not fire, but it still receives the action from its nearest neighbors.
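The drive-and-relax cycle above (Eqs. (2)-(3) plus the n-step refractory gate) can be condensed into a short sketch. This is one plausible reading of the rules, not the authors' code: the lattice size, random seed, sequential sweep order, and the simplification that time does not advance within an avalanche (so each neuron fires at most once per avalanche) are all choices made here for illustration.

```python
# Hedged sketch of one driving-and-avalanche step of stage (iii), with refractory period.
import random

L, V_TH, SIGMA, N_REF = 8, 1.0, 1.0, 2   # lattice side, threshold, sigma = 1 (conservation), refractory length
random.seed(1)
V = [[random.random() for _ in range(L)] for _ in range(L)]
last_fire = [[-10**9] * L for _ in range(L)]     # time step at which each neuron last fired

def one_avalanche(t):
    """Drive with Eq. (3), then relax with Eq. (2); return the avalanche size."""
    # (a) driving rule: raise every neuron by V_th - V_max
    im, jm = max(((i, j) for i in range(L) for j in range(L)), key=lambda p: V[p[0]][p[1]])
    delta = V_TH - V[im][jm]
    for i in range(L):
        for j in range(L):
            V[i][j] += delta
    V[im][jm] = V_TH                              # guard against floating-point shortfall
    # (b)-(c) relaxation: sweep until no neuron can fire
    size, unstable = 0, True
    while unstable:
        unstable = False
        for i in range(L):
            for j in range(L):
                if V[i][j] >= V_TH and t - last_fire[i][j] > N_REF:
                    pulse = 0.25 * SIGMA * V[i][j]   # Eq. (2): 0.25 * sigma * V at the firing site
                    V[i][j] = 0.0
                    last_fire[i][j] = t
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < L and 0 <= nj < L:  # open boundary: pulses leaving the lattice are lost
                            V[ni][nj] += pulse
                    size += 1
                    unstable = True
    return size
```

With σ = 1 the redistribution conserves potential in the bulk, and only boundary firings dissipate it, which is what lets the slow drive of step (a) balance against avalanches.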
3 Simulation Results

3.1 Influence of Different Nonlinear Interactive Functions on SOC Behaviors

In this paper we let σ(x) = tanh(αx) exp(−βx²). α and β can be varied over a wide range, so they can produce many nonlinear functions, as shown in Fig. 1. When α = 1 and β = 1, the curve is similar to a trigonometric function (curve 1 in Fig. 1); when α = 10 and β = 0.001, the curve is similar to a piecewise linear function (curve 2); when α = 10 and β = 100, the curve is similar to a δ function (curve 3). We let α = 150 and β = 50 (curve 4) in this paper. This function easily drives the dynamical behavior of the system to chaos, [12] so with it we can more readily simulate the working state of the brain. In the function σ(x), α mainly decides the peak value of the function, and β mainly decides its width.

To observe the extent and size of the response of the neurons in the integrate-and-fire mechanism, we show the distribution of the differences of the weights ω between nearest neurons after learning. As shown in Fig. 2, the peak of this distribution lies at small values, starting from about 0.01.

Fig. 1 The function σ takes many forms: curve 1 (α = 1, β = 1) is similar to a trigonometric function; curve 2 (α = 10, β = 0.001) is similar to a piecewise linear function; curve 3 (α = 10, β = 100) is similar to a δ function; curve 4 (α = 150, β = 50) is the form we use in this paper.

Fig. 2 The distribution of the weights after learning. n is the number of occurrences of each value of ‖ω_{i*j*} − ω_{i'j'}‖.

Fig. 3 The interactive function σ between the neurons, showing the correlation between σ and ‖ω_{i*j*} − ω_{i'j'}‖ as the parameter α varies.

Fig. 4 The influence of α on the SOC. We let α = 150, 100, 50, with β = 50 and L = 35.

i) The influence of α on the SOC. Because α mainly decides the peak value of the function, we may consider that α decides the size of the pulse the firing neuron sends out. As α increases, the peak value of the function increases, the pulse sent to the nearest neighbors increases, the neighbors reach the threshold value more easily, and so the probability of producing a big avalanche increases. From Fig. 3 we can see that when β = 50 and α varies from 150 to 50, the value of σ corresponding to the peak of the weight differences between nearest lattice sites differs: the value of σ at α = 150 is clearly bigger than that at α = 50. This is why at α = 150 we see the power-law behavior P(S) ∼ S^{−τ} with τ = 1.28, whereas when α decreases to 50, the probability of the avalanche size, P(S), decays exponentially with the size of the avalanches, which means there are only localized behaviors (see Fig. 4).
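The four parameter settings of Fig. 1 are easy to explore numerically. The helper below (illustrative, not the paper's code; the grid-search range is an assumption) evaluates σ(x) = tanh(αx) exp(−βx²) and confirms the roles of the two parameters: α mainly sets the peak height, β mainly sets the width.

```python
# Hedged sketch: evaluating the interactive function sigma(x) = tanh(alpha*x) * exp(-beta*x^2).
import math

def sigma(x, alpha, beta):
    """The interactive function used in the paper."""
    return math.tanh(alpha * x) * math.exp(-beta * x * x)

def peak(alpha, beta, xmax=2.0, n=2000):
    """Crude grid search for the maximum of sigma on [0, xmax]."""
    return max(sigma(k * xmax / n, alpha, beta) for k in range(n + 1))

# the four curves of Fig. 1
for label, (a, b) in {"curve 1": (1, 1), "curve 2": (10, 0.001),
                      "curve 3": (10, 100), "curve 4": (150, 50)}.items():
    print("%s: alpha=%g, beta=%g, peak=%.3f" % (label, a, b, peak(a, b)))
```

Holding β fixed and raising α raises the peak (a bigger pulse to the neighbors), while holding α fixed and raising β narrows the function (a smaller σ at a given weight difference), matching the discussion in the text.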
ii) The influence of β on the SOC. After the learning process, we find that the neural network's weights self-organize into a topological map of the input space, so the differences of the weight vectors ω between all neurons are limited to a small range (see Fig. 2). Because β mainly decides the width of the function, increasing β decreases the width of the function. This makes the values of σ corresponding to β = 50, β = 100, and β = 150 decrease in turn (see Fig. 5). The value which the firing neuron sends to its nearest neighbors decreases, so the possibility that the neighbors fire decreases too. As shown in Fig. 6, when α = 150 and β = 50 we see the power-law behavior P(S) ∼ S^{−τ} with τ = 1.28, but when β increases to 150 we can no longer see the power-law behavior.

Fig. 5 The interactive function σ between the neurons, showing the correlation between σ and ‖ω_{i*j*} − ω_{i'j'}‖ as the parameter β varies.

Fig. 6 The influence of β on the SOC. We let β = 50, 100, 150, with α = 150 and L = 35.

3.2 Influence of the Refractory Period on SOC

i) The influence of the size of the refractory period on SOC. We let the size of the refractory period be m = 1, 2, 3, 4, 6, 9 time steps and compare with the system without a refractory period (m = 0). In this process we use the collateral fire mechanism. From Fig. 7 we can see that the system shows the power-law behavior P(S) ∼ S^{−τ}; the exponent τ of the system without a refractory period is bigger than that of the system with one (see Fig. 7). As the size of the refractory period increases, the exponent τ of the power-law behavior decreases. But when m ≥ 3, the exponent τ is almost invariant: for periods m > 3, the influence of the refractory period is similar to that of m = 3. That is to say, once the size of the refractory period exceeds some value, its influence on the SOC behaviors of the system is almost invariant. If we let 0.3 millisecond be one time step, a millisecond is about three time steps; this matches the synaptic delay of 0.3 ms to 1 ms in biology. [7]

Fig. 7 The influence of the length of the refractory period on the SOC. We let the refractory period m = 0, 1, 2, 3, 4, 6, 9 and L = 35.

ii) The influence of learning on SOC after considering the refractory period.

Fig. 8 The influence of learning on the SOC after considering the refractory period.

Now we investigate the influence of learning on the SOC behavior. We let m = 1, and the neuron's firing conforms to Eq. (2). We find that, even after considering the refractory period, the system shows a good power-law behavior, as shown in Fig. 8.
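The exponents τ quoted in this section come from the slope of P(S) on a log-log plot. The paper does not spell out its fitting procedure; the sketch below shows one common approach, a least-squares fit of log(count) against log(size), exercised on synthetic power-law data. The function name, size cutoff, sample count, and test exponent are all illustrative assumptions.

```python
# Hedged sketch: estimating a power-law exponent tau from avalanche sizes
# via a least-squares fit on a log-log histogram (not the paper's procedure).
import math
import random
from collections import Counter

def fit_tau(sizes, s_max=100):
    """Least-squares slope of log(count) vs log(size); returns -slope as tau."""
    counts = Counter(s for s in sizes if 1 <= s <= s_max)
    xs = sorted(counts)
    lx = [math.log(s) for s in xs]
    ly = [math.log(counts[s]) for s in xs]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    return -slope

# synthetic sizes with P(S) ~ S^(-1.3), drawn by inverse-CDF sampling
random.seed(2)
tau_true = 1.3
sizes = [int((1.0 - random.random()) ** (-1.0 / (tau_true - 1.0))) for _ in range(20000)]
print("estimated tau = %.2f" % fit_tau(sizes))
```

A plain log-log least-squares fit is known to be somewhat biased for power laws (binning and noisy tail counts matter), so the estimate is a rough check rather than a precise measurement.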
So learning is also important to the SOC after considering the refractory period; this conclusion is the same as the result of our previous work. [9] The reason is that before the learning process the afferent weights are randomly distributed (see Fig. 9) and thus do not respond to the topological structure of the input space. Then ‖ω_{i*j*} − ω_{i'j'}‖ is large, which means the synaptic connections between neighboring neurons are weak. After the learning process, neighboring neurons respond to nearby input vectors, and the afferent weights self-organize into a topological map of the input space, so ‖ω_{i*j*} − ω_{i'j'}‖ is small, which means the synaptic connections between neighboring neurons are strong.

Fig. 9 The distribution of the weights without learning. n is the number of occurrences of each value of ‖ω_{i*j*} − ω_{i'j'}‖.

4 Conclusion

In this paper, we introduce a nonlinear function into the integrate-and-fire mechanism and investigate the influence of the parameters α and β of this function on the SOC behaviors of the model. We also investigate the influence of the refractory period, which biological neurons possess, on the SOC behavior of the model. We find that the influence of the size of the refractory period on the SOC is obvious, but when m > 3 the exponent τ of the power law is basically invariant. Our work merely tries to indicate some relations between the SOC behavior and the associative memory process of the brain; it might provide an approach for analyzing the collective behavior of neuron populations in the brain. Because there are many kinds of mechanisms in the brain, our model is only a very simple simulation of the brain, and many details of neurobiology are ignored. There is still a lot of work to do.

References
[1] P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. A 38 (1988) 364.
[2] Z. Olami, S. Feder, and K. Christensen, Phys. Rev. Lett. 68 (1992) 1244; K. Christensen and Z. Olami, Phys. Rev. A 46 (1992).
[3] P. Bak and K. Sneppen, Phys. Rev. Lett. 71 (1993).
[4] K. Christensen, H. Flyvbjerg, and Z. Olami, Phys. Rev. Lett. 71 (1993).
[5] T. Gisiger, Biol. Rev. 76 (2001) 161.
[6] P. Bak, How Nature Works: The Science of Self-Organized Criticality, Springer-Verlag, New York (1996).
[7] SUN Jiu-Rong, The Basic Theory of the Brain, Peking University Press, Beijing (2001).
[8] T. Kohonen, Proceedings of the IEEE 78 (1990).
[9] ZHAO Xiao-Wei and CHEN Tian-Lun, Phys. Rev. E 65 (2002).
[10] CHEN Dan-Mei et al., J. Phys. A: Math. Gen. 28 (1995).
[11] ZHAO Xiao-Wei and CHEN Tian-Lun, Commun. Theor. Phys. (Beijing, China) 40 (2003) 363.
[12] J.W. Shuai, Phys. Rev. E 56 (1997) 890.
More informationTime correlations in self-organized criticality (SOC)
SMR.1676-8 8th Workshop on Non-Linear Dynamics and Earthquake Prediction 3-15 October, 2005 ------------------------------------------------------------------------------------------------------------------------
More informationAvalanches in Fractional Cascading
Avalanches in Fractional Cascading Angela Dai Advisor: Prof. Bernard Chazelle May 8, 2012 Abstract This paper studies the distribution of avalanches in fractional cascading, linking the behavior to studies
More informationarxiv:cond-mat/ v1 17 Aug 1994
Universality in the One-Dimensional Self-Organized Critical Forest-Fire Model Barbara Drossel, Siegfried Clar, and Franz Schwabl Institut für Theoretische Physik, arxiv:cond-mat/9408046v1 17 Aug 1994 Physik-Department
More informationMathematical Models of Dynamic Behavior of Individual Neural Networks of Central Nervous System
Mathematical Models of Dynamic Behavior of Individual Neural Networks of Central Nervous System Dimitra Despoina Pagania 1, Adam Adamopoulos 1,2 and Spiridon D. Likothanassis 1 1 Pattern Recognition Laboratory,
More informationIntroduction. Previous work has shown that AER can also be used to construct largescale networks with arbitrary, configurable synaptic connectivity.
Introduction The goal of neuromorphic engineering is to design and implement microelectronic systems that emulate the structure and function of the brain. Address-event representation (AER) is a communication
More information(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann
(Feed-Forward) Neural Networks 2016-12-06 Dr. Hajira Jabeen, Prof. Jens Lehmann Outline In the previous lectures we have learned about tensors and factorization methods. RESCAL is a bilinear model for
More information7 Recurrent Networks of Threshold (Binary) Neurons: Basis for Associative Memory
Physics 178/278 - David Kleinfeld - Winter 2019 7 Recurrent etworks of Threshold (Binary) eurons: Basis for Associative Memory 7.1 The network The basic challenge in associative networks, also referred
More informationSingle Neuron Dynamics for Retaining and Destroying Network Information?
Single Neuron Dynamics for Retaining and Destroying Network Information? Michael Monteforte, Tatjana Tchumatchenko, Wei Wei & F. Wolf Bernstein Center for Computational Neuroscience and Faculty of Physics
More informationTuning tuning curves. So far: Receptive fields Representation of stimuli Population vectors. Today: Contrast enhancment, cortical processing
Tuning tuning curves So far: Receptive fields Representation of stimuli Population vectors Today: Contrast enhancment, cortical processing Firing frequency N 3 s max (N 1 ) = 40 o N4 N 1 N N 5 2 s max
More informationProjective synchronization of a complex network with different fractional order chaos nodes
Projective synchronization of a complex network with different fractional order chaos nodes Wang Ming-Jun( ) a)b), Wang Xing-Yuan( ) a), and Niu Yu-Jun( ) a) a) School of Electronic and Information Engineering,
More informationNeural networks. Chapter 19, Sections 1 5 1
Neural networks Chapter 19, Sections 1 5 Chapter 19, Sections 1 5 1 Outline Brains Neural networks Perceptrons Multilayer perceptrons Applications of neural networks Chapter 19, Sections 1 5 2 Brains 10
More informationTracking the State of the Hindmarsh-Rose Neuron by Using the Coullet Chaotic System Based on a Single Input
ISSN 1746-7659, England, UK Journal of Information and Computing Science Vol. 11, No., 016, pp.083-09 Tracking the State of the Hindmarsh-Rose Neuron by Using the Coullet Chaotic System Based on a Single
More informationVisual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations
Visual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations Haili Wang, Yuanhua Qiao, Lijuan Duan, Faming Fang, Jun Miao 3, and Bingpeng Ma 3 College of Applied Science, Beijing University
More informationHopfield Networks. (Excerpt from a Basic Course at IK 2008) Herbert Jaeger. Jacobs University Bremen
Hopfield Networks (Excerpt from a Basic Course at IK 2008) Herbert Jaeger Jacobs University Bremen Building a model of associative memory should be simple enough... Our brain is a neural network Individual
More informationBalance of Electric and Diffusion Forces
Balance of Electric and Diffusion Forces Ions flow into and out of the neuron under the forces of electricity and concentration gradients (diffusion). The net result is a electric potential difference
More informationMathematical Models of Dynamic Behavior of Individual Neural Networks of Central Nervous System
Mathematical Models of Dynamic Behavior of Individual Neural Networks of Central Nervous System Dimitra-Despoina Pagania,*, Adam Adamopoulos,2, and Spiridon D. Likothanassis Pattern Recognition Laboratory,
More informationON SELF-ORGANIZED CRITICALITY AND SYNCHRONIZATION IN LATTICE MODELS OF COUPLED DYNAMICAL SYSTEMS
International Journal of Modern Physics B, c World Scientific Publishing Company ON SELF-ORGANIZED CRITICALITY AND SYNCHRONIZATION IN LATTICE MODELS OF COUPLED DYNAMICAL SYSTEMS CONRAD J. PÉREZ, ÁLVARO
More informationQuasi-Stationary Simulation: the Subcritical Contact Process
Brazilian Journal of Physics, vol. 36, no. 3A, September, 6 685 Quasi-Stationary Simulation: the Subcritical Contact Process Marcelo Martins de Oliveira and Ronald Dickman Departamento de Física, ICEx,
More informationAsynchronous updating of threshold-coupled chaotic neurons
PRAMANA c Indian Academy of Sciences Vol. 70, No. 6 journal of June 2008 physics pp. 1127 1134 Asynchronous updating of threshold-coupled chaotic neurons MANISH DEV SHRIMALI 1,2,3,, SUDESHNA SINHA 4 and
More informationProbabilistic Models in Theoretical Neuroscience
Probabilistic Models in Theoretical Neuroscience visible unit Boltzmann machine semi-restricted Boltzmann machine restricted Boltzmann machine hidden unit Neural models of probabilistic sampling: introduction
More informationNeuron. Detector Model. Understanding Neural Components in Detector Model. Detector vs. Computer. Detector. Neuron. output. axon
Neuron Detector Model 1 The detector model. 2 Biological properties of the neuron. 3 The computational unit. Each neuron is detecting some set of conditions (e.g., smoke detector). Representation is what
More informationStatistical Properties of a Ring Laser with Injected Signal and Backscattering
Commun. Theor. Phys. (Beijing, China) 35 (2001) pp. 87 92 c International Academic Publishers Vol. 35, No. 1, January 15, 2001 Statistical Properties of a Ring Laser with Injected Signal and Backscattering
More informationLocalized Excitations in Networks of Spiking Neurons
Localized Excitations in Networks of Spiking Neurons Hecke Schrobsdorff Bernstein Center for Computational Neuroscience Göttingen Max Planck Institute for Dynamics and Self-Organization Seminar: Irreversible
More informationBifurcation control and chaos in a linear impulsive system
Vol 8 No 2, December 2009 c 2009 Chin. Phys. Soc. 674-056/2009/82)/5235-07 Chinese Physics B and IOP Publishing Ltd Bifurcation control and chaos in a linear impulsive system Jiang Gui-Rong 蒋贵荣 ) a)b),
More informationBiological Modeling of Neural Networks
Week 4 part 2: More Detail compartmental models Biological Modeling of Neural Networks Week 4 Reducing detail - Adding detail 4.2. Adding detail - apse -cable equat Wulfram Gerstner EPFL, Lausanne, Switzerland
More informationShort Term Memory and Pattern Matching with Simple Echo State Networks
Short Term Memory and Pattern Matching with Simple Echo State Networks Georg Fette (fette@in.tum.de), Julian Eggert (julian.eggert@honda-ri.de) Technische Universität München; Boltzmannstr. 3, 85748 Garching/München,
More informationSPSS, University of Texas at Arlington. Topics in Machine Learning-EE 5359 Neural Networks
Topics in Machine Learning-EE 5359 Neural Networks 1 The Perceptron Output: A perceptron is a function that maps D-dimensional vectors to real numbers. For notational convenience, we add a zero-th dimension
More informationSynchronization and Bifurcation Analysis in Coupled Networks of Discrete-Time Systems
Commun. Theor. Phys. (Beijing, China) 48 (2007) pp. 871 876 c International Academic Publishers Vol. 48, No. 5, November 15, 2007 Synchronization and Bifurcation Analysis in Coupled Networks of Discrete-Time
More informationarxiv:cond-mat/ v1 7 May 1996
Stability of Spatio-Temporal Structures in a Lattice Model of Pulse-Coupled Oscillators A. Díaz-Guilera a, A. Arenas b, A. Corral a, and C. J. Pérez a arxiv:cond-mat/9605042v1 7 May 1996 Abstract a Departament
More informationBidirectional Partial Generalized Synchronization in Chaotic and Hyperchaotic Systems via a New Scheme
Commun. Theor. Phys. (Beijing, China) 45 (2006) pp. 1049 1056 c International Academic Publishers Vol. 45, No. 6, June 15, 2006 Bidirectional Partial Generalized Synchronization in Chaotic and Hyperchaotic
More informationHow do biological neurons learn? Insights from computational modelling of
How do biological neurons learn? Insights from computational modelling of neurobiological experiments Lubica Benuskova Department of Computer Science University of Otago, New Zealand Brain is comprised
More informationAction Potentials and Synaptic Transmission Physics 171/271
Action Potentials and Synaptic Transmission Physics 171/271 Flavio Fröhlich (flavio@salk.edu) September 27, 2006 In this section, we consider two important aspects concerning the communication between
More informationNervous Tissue. Neurons Electrochemical Gradient Propagation & Transduction Neurotransmitters Temporal & Spatial Summation
Nervous Tissue Neurons Electrochemical Gradient Propagation & Transduction Neurotransmitters Temporal & Spatial Summation What is the function of nervous tissue? Maintain homeostasis & respond to stimuli
More informationScale-free network of earthquakes
Scale-free network of earthquakes Sumiyoshi Abe 1 and Norikazu Suzuki 2 1 Institute of Physics, University of Tsukuba, Ibaraki 305-8571, Japan 2 College of Science and Technology, Nihon University, Chiba
More informationPredicting Synchrony in Heterogeneous Pulse Coupled Oscillators
Predicting Synchrony in Heterogeneous Pulse Coupled Oscillators Sachin S. Talathi 1, Dong-Uk Hwang 1, Abraham Miliotis 1, Paul R. Carney 1, and William L. Ditto 1 1 J Crayton Pruitt Department of Biomedical
More informationRational Form Solitary Wave Solutions and Doubly Periodic Wave Solutions to (1+1)-Dimensional Dispersive Long Wave Equation
Commun. Theor. Phys. (Beijing, China) 43 (005) pp. 975 98 c International Academic Publishers Vol. 43, No. 6, June 15, 005 Rational Form Solitary Wave Solutions and Doubly Periodic Wave Solutions to (1+1)-Dimensional
More informationHierarchy. Will Penny. 24th March Hierarchy. Will Penny. Linear Models. Convergence. Nonlinear Models. References
24th March 2011 Update Hierarchical Model Rao and Ballard (1999) presented a hierarchical model of visual cortex to show how classical and extra-classical Receptive Field (RF) effects could be explained
More information7 Rate-Based Recurrent Networks of Threshold Neurons: Basis for Associative Memory
Physics 178/278 - David Kleinfeld - Fall 2005; Revised for Winter 2017 7 Rate-Based Recurrent etworks of Threshold eurons: Basis for Associative Memory 7.1 A recurrent network with threshold elements The
More informationCh.8 Neural Networks
Ch.8 Neural Networks Hantao Zhang http://www.cs.uiowa.edu/ hzhang/c145 The University of Iowa Department of Computer Science Artificial Intelligence p.1/?? Brains as Computational Devices Motivation: Algorithms
More informationCausality and communities in neural networks
Causality and communities in neural networks Leonardo Angelini, Daniele Marinazzo, Mario Pellicoro, Sebastiano Stramaglia TIRES-Center for Signal Detection and Processing - Università di Bari, Bari, Italy
More informationNerve Signal Conduction. Resting Potential Action Potential Conduction of Action Potentials
Nerve Signal Conduction Resting Potential Action Potential Conduction of Action Potentials Resting Potential Resting neurons are always prepared to send a nerve signal. Neuron possesses potential energy
More information