Baltzer Journals

Stochastic Neurodynamics and the System Size Expansion

Toru Ohira 1 and Jack D. Cowan 2

1 Sony Computer Science Laboratory, 3-14-13 Higashi-gotanda, Shinagawa, Tokyo 141, Japan. E-mail: ohira@csl.sony.co.jp
2 Departments of Mathematics and Neurology, The University of Chicago, Chicago, IL 60637. E-mail: cowan@synapse.uchicago.edu

We present here a method for the study of stochastic neurodynamics in the master equation framework. Our aim is to obtain a statistical description of the dynamics of fluctuations and correlations of neural activity in large neural networks. We focus on a macroscopic description of the network via a master equation for the number of active neurons in the network. We present a systematic expansion of this equation using the "system size expansion". We obtain coupled dynamical equations for the average activity and for the fluctuations around this average. These equations exhibit non-monotonic approaches to equilibrium, as seen in Monte Carlo simulations.

Keywords: stochastic neurodynamics, master equation, system size expansion.

1 Introduction

The correlated firing of neurons is considered to be an integral part of information processing in the brain [2, 12]. Experimentally, cross-correlations are used to study synaptic interactions between neurons and to probe for synchronous network activity. In theoretical studies of stochastic neural networks, understanding the dynamics of correlated neural activity requires one to go beyond the mean-field approximation, which neglects correlations in non-equilibrium states [8, 6]. In other words, we need to go beyond the simple mean-field approximation to study the effects of fluctuations about average firing activities. Recently, we have analyzed stochastic neurodynamics using a master equation [5, 8].

A network comprising binary neurons with asynchronous stochastic dynamics is considered, and a master equation is written in "second quantized form" to take advantage of the theoretical tools that then become available for its analysis. A hierarchy of moment equations is obtained and a heuristic closure at the level of the second moment equations is introduced. Another approach based on the master equation via path integrals, and the extension to neurons with a refractory state, are discussed in [9, 10].

In this paper, we introduce another master-equation-based approach to go beyond the mean-field approximation. We concentrate on the macroscopic behavior of a network of two-state neurons, and introduce a master equation for the number of active neurons in the network at time t. We use a more systematic expansion of the master equation than hitherto, the "system size expansion" [11]. The expansion parameter is the inverse of the total number of neurons in the network. We truncate the expansion at second order and obtain an equation for fluctuations about the mean number of active neurons, which is itself coupled to the equation for the average number of active neurons at time t. These equations show non-monotonic approaches to equilibrium values near critical points, a feature which is not seen in the mean-field approximation. Monte Carlo simulations of the master equation itself show qualitatively similar non-monotonic behavior.

2 Master Equation and the System Size Expansion

We first construct a master equation for a network comprising N binary elements with two states, "active" (firing) and "quiescent" (non-firing). The transitions between these states are probabilistic, and we assume that the transition rate from active to quiescent is a constant α for every neuron in the network. We do not make any special assumption about network connectivity, but assume that it is "homogeneous", i.e., all neurons are statistically equivalent with respect to their activities, which depend only on the proportion of active neurons in the network. More specifically, the transition rate from quiescent to active is given as a function φ of the proportion of active neurons in the network. Taking the firing time to be about 2 ms gives α of order 500 s⁻¹; the range of the function φ is 0–30 s⁻¹, reflecting empirically observed firing rates of cortical neurons. With these assumptions, one can write the master equation as follows:

−∂_t P_N[n; t] = α( n P_N[n; t] − (n+1) P_N[n+1; t] ) + N(1 − n/N) φ(n/N) P_N[n; t] − N(1 − (n−1)/N) φ((n−1)/N) P_N[n−1; t],   (1)

where P_N[n; t] is the probability that the number of active neurons is n at time t. (We have absorbed the parameter representing the total synaptic weight into the function φ.)
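The birth–death structure of Eq. (1) can be checked directly by evolving the probability vector P_N[n; t] on the finite state space n = 0, …, N. The short sketch below does this with forward Euler steps; the sigmoidal form of φ and all parameter values are illustrative assumptions only, since the specific rate function is not fixed here.

import numpy as np

N = 100          # network size (illustrative)
alpha = 1.0      # active -> quiescent rate; time is measured in units of 1/alpha

def phi(x, gain=8.0, theta=0.4, rate_max=1.0):
    # assumed sigmoidal quiescent -> active rate as a function of the active fraction x
    return rate_max / (1.0 + np.exp(-gain * (x - theta)))

n = np.arange(N + 1)
r = alpha * n              # r_n = alpha n, decay rate out of state n
g = (N - n) * phi(n / N)   # g_n = (N - n) phi(n/N), activation rate out of state n

def step(P, dt):
    # dP[n]/dt = r[n+1] P[n+1] - r[n] P[n] + g[n-1] P[n-1] - g[n] P[n], as in Eq. (3);
    # the wrapped-around terms from np.roll vanish because r[0] = 0 and g[N] = 0.
    inflow_decay = np.roll(r * P, -1)   # r[n+1] P[n+1]
    inflow_birth = np.roll(g * P, 1)    # g[n-1] P[n-1]
    return P + dt * (inflow_decay - r * P + inflow_birth - g * P)

P = np.zeros(N + 1)
P[N // 2] = 1.0                         # start with exactly N/2 active neurons
for _ in range(20000):
    P = step(P, dt=1e-3)
print("mean active fraction:", (n * P).sum() / N)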

This master equation can be deduced from the second quantized form cited earlier; this will be discussed elsewhere. Equation (1) can be rewritten in a standard form by introducing the "step operator" E, defined by its action on an arbitrary function of n:

E f(n) = f(n+1),   E⁻¹ f(n) = f(n−1).   (2)

In effect, E and E⁻¹ shift n by one. Using these step operators, Eq. (1) becomes

∂_t P_N[n; t] = (E − 1) r_n P_N[n; t] + (E⁻¹ − 1) g_n P_N[n; t],   (3)

where r_n = αn and g_n = (N − n) φ(n/N). This master equation is non-linear, since g_n is a nonlinear function of n. Linear master equations, in which both r_n and g_n are linear functions of n, can be solved exactly. In general, however, non-linear master equations cannot be solved exactly, so in our case we seek an approximate solution.

We now expand the master equation to obtain approximate equations for the stochastic dynamics of the network. We use the system size expansion, which is closely related to the Kramers–Moyal expansion, to obtain the "macroscopic equation" and time-dependent approximations to the fluctuations about its solutions. In essence, this method is a way to expand master equations in powers of a small parameter, which is usually identified as the inverse size of the system. Here, we identify the system size with the total number of neurons in the network.

We make a change of variables in the master equation (3). We assume that fluctuations about the macroscopic value of n are of order N^{1/2}; in other words, we expect that P_N[n; t] will have a maximum around the macroscopic value of n with a width of order N^{1/2}. Hence, we set

n(t) = N ν(t) + N^{1/2} ξ(t),   (4)

where ν satisfies the macroscopic equation and ξ is a new variable whose distribution is equivalent to P_N[n; t], i.e., P_N[n; t] = Π(ξ; t). We expand the step operators as

E = 1 + N^{−1/2} ∂_ξ + (1/2) N^{−1} ∂_ξ² + …,   E⁻¹ = 1 − N^{−1/2} ∂_ξ + (1/2) N^{−1} ∂_ξ² − …,   (5)

since E shifts ξ by +N^{−1/2}. With this change of variables, the master equation becomes

∂_t Π(ξ; t) − N^{1/2} (dν/dt) ∂_ξ Π(ξ; t) = N ( N^{−1/2} ∂_ξ + (1/2) N^{−1} ∂_ξ² + … ) [ α(ν + N^{−1/2} ξ) Π(ξ; t) ] + N ( −N^{−1/2} ∂_ξ + (1/2) N^{−1} ∂_ξ² − … ) [ (1 − ν − N^{−1/2} ξ) φ(ν + N^{−1/2} ξ) Π(ξ; t) ].   (6)

Collecting terms, we obtain, to order N^{1/2},

dν/dt = −αν + (1 − ν) φ(ν).   (7)

This is the macroscopic equation, which can also be obtained by a mean-field approximation to the master equation. We require ν to satisfy this equation, so that the terms of order N^{1/2} vanish. The next order, N⁰, gives a Fokker–Planck equation for the fluctuation ξ (with φ'(ν) ≡ ∂_x φ(x)|_{x=ν}):

∂_t Π = −∂_ξ { [ −α − φ(ν) + (1 − ν) φ'(ν) ] ξ Π } + (1/2) ∂_ξ² { [ αν + (1 − ν) φ(ν) ] Π }.   (8)

We note that this equation does not depend on N, justifying our assumption that the fluctuations are of order N^{1/2}.

We now study the behavior of the equations obtained through the system size expansion to second order. From Eqs. (7) and (8) we obtain, for the first two moments of the fluctuation,

d⟨ξ⟩/dt = [ −α − φ(ν) + (1 − ν) φ'(ν) ] ⟨ξ⟩,   (9)

d⟨ξ²⟩/dt = 2 [ −α − φ(ν) + (1 − ν) φ'(ν) ] ⟨ξ²⟩ + αν + (1 − ν) φ(ν),   (10)

where ⟨n⟩ = Nν + N^{1/2} ⟨ξ⟩. Equations (9) and (10) can be integrated numerically together with (7). Some examples are shown in Figure 1(B); for comparison, we plot solutions of the macroscopic equation with the same parameter sets in Figure 1(A). We observe the physically expected bifurcation into an active network state, for either approximation, as the bifurcation parameter is decreased. However, different dynamics are seen as one approaches the bifurcation point: in particular, the coupled equations exhibit a non-monotonic approach to the limiting value. The validity of the system size expansion is limited to the region not close to the bifurcation point, as discussed in the last section. The point is that by incorporating higher-order terms into the approximation, we can extend its validity to a domain closer to the bifurcation point and thereby better capture the stochastic dynamical behavior of such networks.
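The coupled system (7), (9), (10) is straightforward to integrate numerically; a minimal sketch with forward Euler steps is given below. The sigmoidal form of φ, the parameter values, and the network size N are illustrative assumptions only and are not the settings used for the figures.

import numpy as np

alpha = 1.0   # decay rate; time in units of 1/alpha

def phi(x, gain=8.0, theta=0.4, rate_max=1.0):
    # assumed sigmoidal activation rate; the specific form is not fixed here
    return rate_max / (1.0 + np.exp(-gain * (x - theta)))

def dphi(x, eps=1e-6):
    return (phi(x + eps) - phi(x - eps)) / (2.0 * eps)   # numerical phi'(x)

def rhs(nu, m, v):
    drift = -alpha - phi(nu) + (1.0 - nu) * dphi(nu)   # drift coefficient in Eq. (8)
    diff = alpha * nu + (1.0 - nu) * phi(nu)           # diffusion coefficient in Eq. (8)
    dnu = -alpha * nu + (1.0 - nu) * phi(nu)           # Eq. (7)
    dm = drift * m                                     # Eq. (9), m = <xi>
    dv = 2.0 * drift * v + diff                        # Eq. (10), v = <xi^2>
    return dnu, dm, dv

nu, m, v = 0.5, 0.1, 0.01        # illustrative initial conditions
N, dt, T = 2500, 1e-3, 20.0      # N enters only through the fluctuation term below
mean_activity = []
for _ in range(int(T / dt)):
    dnu, dm, dv = rhs(nu, m, v)
    nu, m, v = nu + dt * dnu, m + dt * dm, v + dt * dv
    mean_activity.append(nu + m / np.sqrt(N))   # <n>/N = nu + <xi>/sqrt(N)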

Monte Carlo simulations [3] of a two-dimensional network based on (1), with 2500 neurons and periodic boundary conditions, were performed. The connectivity is set up as follows: each neuron is connected to a specified number k of other neurons chosen randomly from the network. The strength of a connection is given by the Poisson form

w_ij = w₀ (r_ij^s / s!) e^{−r_ij},   (11)

where r_ij is the distance between the two neurons, and w₀ and s are constants. We show in Figure 2 the average behavior of the activity for (A) k = 200 (k/N = 0.08) and (B) k = 150 (k/N = 0.06). The non-monotonic dynamics is more noticeable in the low-connectivity network. More quantitative comparisons between simulations and theory will be carried out in the future. The qualitative comparison shown here, however, indicates the need to model fluctuations of the total activity near critical points in order to capture the dynamics of sparsely connected networks. This is consistent with our earlier investigations of a one-dimensional ring of neurons via a master equation.

Figure 1: Comparison of solutions of (A) the macroscopic equation (7) and (B) the coupled equations (9) and (10), plotted as functions of time for three parameter sets (a)–(c) approaching the bifurcation point. The initial condition is ν = 0.5.

Figure 2: Monte Carlo simulations of the master equation with (A) high (k = 200) and (B) low (k = 150) connectivity per neuron, for three parameter settings (a)–(c) in each panel. The initial condition is a random configuration.
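The Monte Carlo procedure described above can be sketched as follows. Only the Poisson form (11) of the weights is taken from the text; the lattice size, connectivity, activation rate φ(h) as a function of the weighted input h, and all parameter values are illustrative assumptions.

import numpy as np
from math import factorial

rng = np.random.default_rng(0)
L, k = 20, 30                   # illustrative lattice size and connectivity per neuron
N = L * L
alpha, w0, s = 1.0, 1.0, 3      # decay rate and weight parameters (assumed)

def phi(h, gain=4.0, theta=0.5, rate_max=1.0):
    # assumed activation rate as a function of the weighted input h
    return rate_max / (1.0 + np.exp(-gain * (h - theta)))

xy = np.array([(i % L, i // L) for i in range(N)], dtype=float)

def distance(i, j):
    d = np.abs(xy[i] - xy[j])
    d = np.minimum(d, L - d)    # periodic boundary conditions
    return float(np.hypot(d[0], d[1]))

def weight(i, j):
    rij = distance(i, j)
    return w0 * rij ** s * np.exp(-rij) / factorial(s)   # Poisson form, Eq. (11)

# each neuron i receives input from k randomly chosen neurons
pre = np.array([rng.choice([j for j in range(N) if j != i], size=k, replace=False)
                for i in range(N)])
w = np.array([[weight(i, j) for j in pre[i]] for i in range(N)])

state = rng.integers(0, 2, size=N)      # random initial configuration
dt, steps, activity = 0.01, 2000, []
for _ in range(steps):
    h = (w * state[pre]).sum(axis=1) / k        # normalized weighted input (assumed)
    p_on = 1.0 - np.exp(-phi(h) * dt)           # quiescent -> active within dt
    p_off = 1.0 - np.exp(-alpha * dt)           # active -> quiescent within dt
    u = rng.random(N)
    state = np.where(state == 1, (u > p_off).astype(int), (u < p_on).astype(int))
    activity.append(state.mean())               # average activity, as plotted in Figure 2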

3 Discussion

We have outlined an application of the system size expansion to a master equation for stochastic neural network activity. It produces a dynamical equation for the fluctuations about mean activity levels, the solutions of which show a non-monotonic approach to those levels near a critical point; this behavior is also seen in model networks with low connectivity. Two issues raised by this approach require further comment. (1) In this work we have used the number of neurons in the network as the expansion parameter. Given the observation that the overall connectedness affects the stochastic dynamics, a parameter representing the average connectivity per neuron may be better suited as an expansion parameter. We note that this parameter is typically small relative to the network size for biological neural networks. (2) There are many studies of Hebbian learning in neural networks [1, 4, 7]. In such studies, attempts have also been made to incorporate correlations of neural activities. It is of interest to see whether such attempts can be formulated within the framework presented here.

References

[1] S. Amari, K. Maginu, Statistical neurodynamics of associative memory, Neural Networks, 1 (1988) 63-73.
[2] D. J. Amit, N. Brunel, M. V. Tsodyks, Correlations of cortical Hebbian reverberations: theory versus experiment, J. of Neuroscience, 14 (1994) 6435-6445.
[3] K. Binder, Introduction: Theory and "technical" aspects of Monte Carlo simulations, in Monte Carlo Methods in Statistical Physics, 2nd Ed., Springer-Verlag, Berlin, 1986.
[4] A. C. C. Coolen, D. Sherrington, Dynamics of fully connected attractor neural networks near saturation, Phys. Rev. Lett., 71 (1993) 3886-3889.
[5] J. D. Cowan, Stochastic neurodynamics, in Advances in Neural Information Processing Systems 3, R. P. Lippmann, J. E. Moody, and D. S. Touretzky, eds., Morgan Kaufmann, San Mateo, 1991.
[6] I. Ginzburg, H. Sompolinsky, Theory of correlations in stochastic neural networks, Phys. Rev. E, 50 (1994) 3171-3191.
[7] H. Nishimori, T. Ozeki, Retrieval dynamics of associative memory of the Hopfield type, J. Phys. A: Math. Gen., 26 (1993) 859-871.
[8] T. Ohira, J. D. Cowan, Master equation approach to stochastic neurodynamics, Phys. Rev. E, 48 (1993) 2259-2266.
[9] T. Ohira, J. D. Cowan, Feynman diagrams for stochastic neurodynamics, in Proceedings of the Fifth Australian Conference on Neural Networks, Brisbane, 1994.
[10] T. Ohira, J. D. Cowan, Stochastic dynamics of three-state neural networks, in Advances in Neural Information Processing Systems 7, G. Tesauro, D. S. Touretzky, T. K. Leen, eds., MIT Press, Cambridge, 1995.
[11] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland, Amsterdam, 1992.
[12] D. Wang, J. Buhmann, C. von der Malsburg, Pattern segmentation in associative memory, Neural Computation, 2 (1990) 94-106.