Baltzer Journals

Stochastic Neurodynamics and the System Size Expansion

Toru Ohira¹ and Jack D. Cowan²

¹ Sony Computer Science Laboratory, Higashi-gotanda, Shinagawa, Tokyo, Japan, ohira@csl.sony.co.jp
² Departments of Mathematics and Neurology, The University of Chicago, Chicago, IL, cowan@synapse.uchicago.edu

We present here a method for the study of stochastic neurodynamics in the master equation framework. Our aim is to obtain a statistical description of the dynamics of fluctuations and correlations of neural activity in large neural networks. We focus on a macroscopic description of the network via a master equation for the number of active neurons in the network. We present a systematic expansion of this equation using the "system size expansion". We obtain coupled dynamical equations for the average activity and for the fluctuations around this average. These equations exhibit non-monotonic approaches to equilibrium, as seen in Monte Carlo simulations.

Keywords: stochastic neurodynamics, master equation, system size expansion.

1 Introduction

The correlated firing of neurons is considered to be an integral part of information processing in the brain [12, 2]. Experimentally, cross-correlations are used to study synaptic interactions between neurons and to probe for synchronous network activity. In theoretical studies of stochastic neural networks, understanding the dynamics of correlated neural activity requires one to go beyond the mean-field approximation, which neglects correlations in non-equilibrium states [8, 6]. In other words, we need to go beyond the simple mean-field approximation to study the effects of fluctuations about average firing activities. Recently, we have analyzed stochastic neurodynamics using a master equation [5, 8]. A network comprising binary neurons with asynchronous stochastic dynamics
is considered, and a master equation is written in "second quantized form" to take advantage of the theoretical tools that then become available for its analysis. A hierarchy of moment equations is obtained, and a heuristic closure at the level of the second moment equations is introduced. Another approach based on the master equation via path integrals, and the extension to neurons with a refractory state, are discussed in [9, 10].

In this paper, we introduce another approach based on the master equation to go beyond the mean-field approximation. We concentrate on the macroscopic behavior of a network of two-state neurons, and introduce a master equation for the number of active neurons in the network at time t. We use a more systematic expansion of the master equation than hitherto, the "system size expansion" [11]. The expansion parameter is the inverse of the total number of neurons in the network. We truncate the expansion at second order and obtain an equation for the fluctuations about the mean number of active neurons, which is itself coupled to the equation for the average number of active neurons at time t. These equations show non-monotonic approaches to equilibrium values near critical points, a feature which is not seen in the mean-field approximation. Monte Carlo simulations of the master equation itself show qualitatively similar non-monotonic behavior.

2 Master Equation and the System Size Expansion

We first construct a master equation for a network comprising N binary elements with two states, "active" or firing and "quiescent" or non-firing. The transitions between these states are probabilistic, and we assume that the transition rate α from active to quiescent is a constant for every neuron in the network.
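To make the two-state dynamics concrete: if the quiescent-to-active rate were also a constant, say β, each neuron would be an independent two-state Markov process whose active probability obeys dp/dt = −αp + β(1 − p) and relaxes exponentially to β/(α + β). The minimal sketch below checks a forward-Euler integration of this equation against the closed form; the values of α, β, and the step size are illustrative, not taken from the paper.

```python
import math

def p_closed(t, alpha, beta, p0):
    """Closed-form active probability of a two-state Markov neuron with
    constant rates: active->quiescent alpha, quiescent->active beta."""
    p_inf = beta / (alpha + beta)
    return p_inf + (p0 - p_inf) * math.exp(-(alpha + beta) * t)

def p_euler(alpha, beta, p0, dt=1e-4, steps=200):
    """Forward-Euler integration of dp/dt = -alpha*p + beta*(1 - p)."""
    p = p0
    for _ in range(steps):
        p += dt * (-alpha * p + beta * (1.0 - p))
    return p

alpha, beta = 500.0, 100.0            # rates in s^-1 (illustrative only)
p_num = p_euler(alpha, beta, p0=0.0)  # integrates up to t = 0.02 s
p_exact = p_closed(0.02, alpha, beta, p0=0.0)
print(p_num, p_exact)                 # both close to beta/(alpha+beta) = 1/6
```

The interesting case treated in the paper is precisely when the quiescent-to-active rate is not constant but depends on the network activity.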
We do not make any special assumption about network connectivity, but assume that it is "homogeneous", i.e., all neurons are statistically equivalent with respect to their activities, which depend only on the proportion of active neurons in the network. More specifically, the transition rate from quiescent to active is given as a function f(n/N) of the proportion of active neurons in the network. Taking the firing time to be about 2 ms, we have α ≈ 500 s⁻¹. For the transition rate from quiescent to active, the range of the function f is 0 to 10³ s⁻¹, reflecting empirically observed firing rates of cortical neurons. With these assumptions, one can write a master equation as follows:

∂_t P_N[n;t] = α((n+1) P_N[n+1;t] − n P_N[n;t]) + N(1 − (n−1)/N) f((n−1)/N) P_N[n−1;t] − N(1 − n/N) f(n/N) P_N[n;t],   (1)
where P_N[n;t] is the probability that the number of active neurons is n at time t. (We have absorbed the parameter representing the total synaptic weight into the function f.) This master equation can be deduced from the second quantized form cited earlier, which will be discussed elsewhere. The equation can be put in standard form by introducing the "step operator" E, defined by its action on an arbitrary function of n:

E f(n) = f(n+1),   E⁻¹ f(n) = f(n−1).   (2)

In effect, E and E⁻¹ shift n by one. Using these step operators, Eq. (1) becomes

∂_t P_N[n;t] = (E − 1) r_n P_N[n;t] + (E⁻¹ − 1) g_n P_N[n;t],   (3)

where r_n = αn and g_n = (N − n) f(n/N). This master equation is non-linear, since g_n is a non-linear function of n. Linear master equations, in which both r_n and g_n are linear functions of n, can be solved exactly. Non-linear master equations, however, cannot in general be solved exactly, so in our case we seek an approximate solution.

We now expand the master equation to obtain approximate equations for the stochastic dynamics of the network. We use the system size expansion, which is closely related to the Kramers-Moyal expansion, to obtain the "macroscopic equation" and time-dependent approximations to the fluctuations about its solutions. In essence, this method is a way to expand master equations in powers of a small parameter, which is usually identified as the inverse size of the system. Here, we identify the system size with the total number of neurons in the network. We make a change of variables in the master equation (3). We assume that fluctuations about the macroscopic value of n are of order N^{1/2}. In other words, we expect that P_N[n;t] will have a maximum around the macroscopic value of n, with a width of order N^{1/2}.
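The exact solvability of the linear case can be checked directly: if f is a constant β, the rates r_n = αn and g_n = (N − n)β are both linear in n, and the stationary distribution of Eq. (3) is binomial with single-neuron active probability β/(α + β). A minimal sketch verifying this via detailed balance (N, α, and β are arbitrary illustrative values):

```python
from math import comb

def binomial_stationary(N, alpha, beta):
    """Stationary distribution of the linear chain with r_n = alpha*n
    and g_n = (N - n)*beta: Binomial(N, p) with p = beta/(alpha+beta)."""
    p = beta / (alpha + beta)
    return [comb(N, n) * p**n * (1.0 - p)**(N - n) for n in range(N + 1)]

def detailed_balance_residual(N=50, alpha=1.0, beta=0.5):
    """Largest violation of g_{n-1} P(n-1) = r_n P(n) over n = 1..N."""
    P = binomial_stationary(N, alpha, beta)
    worst = 0.0
    for n in range(1, N + 1):
        flow_up = (N - (n - 1)) * beta * P[n - 1]  # probability flux n-1 -> n
        flow_down = alpha * n * P[n]               # probability flux n -> n-1
        worst = max(worst, abs(flow_up - flow_down))
    return worst

print(detailed_balance_residual())  # ~0, up to floating-point rounding
```

No such closed form is available once g_n is non-linear, which is what motivates the expansion carried out next.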
Hence, we set

n(t) = N φ(t) + N^{1/2} ξ(t),   (4)

where φ satisfies the macroscopic equation and ξ is a new variable, whose distribution Π(ξ,t) is equivalent to P_N[n;t], i.e., P_N[n;t] = Π(ξ,t). We expand the step operators as

E = 1 + N^{−1/2} ∂_ξ + (1/2) N^{−1} ∂_ξ² + …,   E⁻¹ = 1 − N^{−1/2} ∂_ξ + (1/2) N^{−1} ∂_ξ² − …,   (5)

since E shifts ξ by N^{−1/2}. With this change of variables, the master equation becomes

∂_t Π(ξ,t) − N^{1/2} (dφ/dt) ∂_ξ Π(ξ,t) = N(N^{−1/2} ∂_ξ + (1/2) N^{−1} ∂_ξ²) [α(φ + N^{−1/2} ξ) Π(ξ,t)] + N(−N^{−1/2} ∂_ξ + (1/2) N^{−1} ∂_ξ² − …) [(1 − φ − N^{−1/2} ξ) f(φ + N^{−1/2} ξ) Π(ξ,t)].   (6)
Collecting terms, we obtain, at order N^{1/2},

dφ/dt = −αφ + (1 − φ) f(φ).   (7)

This is the macroscopic equation, which can also be obtained by applying a mean-field approximation to the master equation. We require φ to satisfy this equation, so that the terms of order N^{1/2} vanish. The next order is N⁰, which gives a Fokker-Planck equation for the fluctuation ξ (with f'(φ) = ∂_x f(x)|_{x=φ}):

∂_t Π = −[−α − f(φ) + (1 − φ) f'(φ)] ∂_ξ (ξ Π) + (1/2) [αφ + (1 − φ) f(φ)] ∂_ξ² Π.   (8)

We note that this equation does not depend on N, justifying our assumption that the fluctuations are of order N^{1/2}.

We now study the behavior of the equations obtained through the system size expansion to second order. From Eqs. (7) and (8), we obtain

d⟨ξ⟩/dt = [−α − f(φ) + (1 − φ) f'(φ)] ⟨ξ⟩,   (9)

d⟨ξ²⟩/dt = 2 [−α − f(φ) + (1 − φ) f'(φ)] ⟨ξ²⟩ + αφ + (1 − φ) f(φ),   (10)

where ⟨n⟩ = Nφ + N^{1/2}⟨ξ⟩. Equations (9) and (10), coupled to (7), can be numerically integrated. Some examples are shown in Figure 1(B). For comparison, we plot solutions of the macroscopic equation with the same parameter sets in Figure 1(A). We observe a physically expected bifurcation into an active network state for either approximation. However, different dynamics are seen as one approaches the bifurcation points. In particular, the coupled equations exhibit a non-monotonic approach to the limiting value. The validity of the system size expansion is limited to the region away from the bifurcation point, as discussed in the last section. The point is that, by incorporating higher-order terms into the approximation, we can extend its validity to a domain closer to the bifurcation point, and thereby better capture the stochastic dynamical behavior of such networks.

Monte Carlo simulations [3] of a two-dimensional network based on Eq. (1), with 2500 neurons and periodic boundary conditions, were performed. The connectivity is set up as follows: a neuron is connected to a specified number k of other neurons, chosen randomly from the network. The strength of a connection is given by the Poisson form

w_ij = w (r_ij^s / s!) e^{−r_ij},   (11)

where r_ij is the distance between the two neurons, and w and s are constants. We show in Figure 2 the average behavior of the activity for (A) k = 20 (k/N = 0.008) and (B)
k = 15 (k/N = 0.006). The non-monotonic dynamics is more noticeable in the low-connectivity network. More quantitative comparisons between simulations and theory will be carried out in the future. The qualitative comparison shown here, however, indicates the need to model fluctuations of the total activity near critical points in order to capture the dynamics of sparsely connected networks. This is consistent with our earlier investigations of a one-dimensional ring of neurons via a master equation.

Figure 1: Comparison of solutions of (A) the macroscopic equation and (B) the coupled equations for the average and the fluctuations, for three parameter settings (a)-(c).

Figure 2: Comparison of Monte Carlo simulations of the master equation with (A) high (k = 20) and (B) low (k = 15) connectivity per neuron, each for three parameter settings (a)-(c). The initial condition is a random configuration.
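The macroscopic equation (7), together with the moment equations for ⟨ξ⟩ and ⟨ξ²⟩ that follow from the Fokker-Planck equation (8), can be integrated with a simple explicit scheme. The sketch below uses forward Euler with an illustrative sigmoid f; the gain, threshold, α, and initial conditions are arbitrary choices, not the parameter values used for the figures.

```python
import math

def f(x):
    """Illustrative sigmoid activation (assumed form: gain 8, threshold 0.3)."""
    return 1.0 / (1.0 + math.exp(-8.0 * (x - 0.3)))

def fprime(x, h=1e-6):
    """Numerical derivative of f."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def integrate(alpha=1.0, phi0=0.05, m0=0.0, v0=0.0, dt=1e-3, steps=10000):
    """Euler integration of the macroscopic equation and the linear-noise
    moment equations for m = <xi> and v = <xi^2>."""
    phi, m, v = phi0, m0, v0
    for _ in range(steps):
        drift = -alpha * phi + (1.0 - phi) * f(phi)       # Eq. (7)
        J = -alpha - f(phi) + (1.0 - phi) * fprime(phi)   # linearized drift
        B = alpha * phi + (1.0 - phi) * f(phi)            # diffusion term
        phi += dt * drift
        m += dt * J * m
        v += dt * (2.0 * J * v + B)
        phi = min(max(phi, 0.0), 1.0)   # keep the activity fraction in [0, 1]
    return phi, m, v

phi, m, v = integrate()
print(phi, m, v)   # phi settles near a fixed point; v stays positive
```

Near a bifurcation of Eq. (7) the linearized drift J approaches zero and the stationary variance B/(2|J|) grows, which is one way to see why the expansion loses validity close to the critical point.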
3 Discussion

We have outlined an application of the system size expansion to a master equation for stochastic neural network activity. It produced a dynamical equation for the fluctuations about mean activity levels, the solutions of which show a non-monotonic approach to such levels near a critical point. This behavior has been seen in model networks with low connectivity. Two issues raised by this approach require further comment. (1) In this work we have used the number of neurons in the network as the expansion parameter. Given the observation that the overall connectedness affects the stochastic dynamics, a parameter representing the average connectivity per neuron may be better suited as an expansion parameter. We note that this parameter is typically small for biological neural networks. (2) There are many studies of Hebbian learning in neural networks [1, 4, 7]. In such studies, attempts have also been made to incorporate correlations of neural activities. It is of interest to see whether such attempts can be formulated within the framework presented here.

References

[1] S. Amari, K. Maginu, Statistical neurodynamics of associative memory, Neural Networks, 1 (1988) 63-73.
[2] D. J. Amit, N. Brunel, M. V. Tsodyks, Correlations of cortical Hebbian reverberations: theory versus experiment, J. of Neuroscience, 14 (1994) 6435-6445.
[3] K. Binder, Introduction: theory and "technical" aspects of Monte Carlo simulations, in Monte Carlo Methods in Statistical Physics, 2nd ed., Springer-Verlag, Berlin, 1986.
[4] A. C. C. Coolen, D. Sherrington, Dynamics of fully connected attractor neural networks near saturation, Phys. Rev. Lett., 71 (1993) 3886-3889.
[5] J. D. Cowan, Stochastic neurodynamics, in Advances in Neural Information Processing Systems 3, R. P. Lippmann, J. E. Moody, and D. S. Touretzky, eds., Morgan Kaufmann, San Mateo, 1991.
[6] I. Ginzburg, H. Sompolinsky, Theory of correlations in stochastic neural networks, Phys. Rev.
E, 50 (1995) 3171-3191.
[7] H. Nishimori, T. Ozeki, Retrieval dynamics of associative memory of the Hopfield type, J. Phys. A: Math. Gen., 26 (1993) 859-871.
[8] T. Ohira, J. D. Cowan, Master equation approach to stochastic neurodynamics, Phys. Rev. E, 48 (1993) 2259-2266.
[9] T. Ohira, J. D. Cowan, Feynman diagrams for stochastic neurodynamics, in Proceedings of the Fifth Australian Conference on Neural Networks, Brisbane, 1994.
[10] T. Ohira, J. D. Cowan, Stochastic dynamics of three-state neural networks, in Advances in Neural Information Processing Systems 7, G. Tesauro, D. S. Touretzky, T. K. Leen, eds., MIT Press, Cambridge, 1995.
[11] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland, Amsterdam, 1992.
[12] D. Wang, J. Buhmann, C. von der Malsburg, Pattern segmentation in associative memory, Neural Computation, 2 (1990) 94-106.
More informationBatch-mode, on-line, cyclic, and almost cyclic learning 1 1 Introduction In most neural-network applications, learning plays an essential role. Throug
A theoretical comparison of batch-mode, on-line, cyclic, and almost cyclic learning Tom Heskes and Wim Wiegerinck RWC 1 Novel Functions SNN 2 Laboratory, Department of Medical hysics and Biophysics, University
More informationA Robust PCA by LMSER Learning with Iterative Error. Bai-ling Zhang Irwin King Lei Xu.
A Robust PCA by LMSER Learning with Iterative Error Reinforcement y Bai-ling Zhang Irwin King Lei Xu blzhang@cs.cuhk.hk king@cs.cuhk.hk lxu@cs.cuhk.hk Department of Computer Science The Chinese University
More informationRobust linear optimization under general norms
Operations Research Letters 3 (004) 50 56 Operations Research Letters www.elsevier.com/locate/dsw Robust linear optimization under general norms Dimitris Bertsimas a; ;, Dessislava Pachamanova b, Melvyn
More informationControllingthe spikingactivity in excitable membranes via poisoning
Physica A 344 (2004) 665 670 www.elsevier.com/locate/physa Controllingthe spikingactivity in excitable membranes via poisoning Gerhard Schmid, Igor Goychuk, Peter Hanggi Universitat Augsburg, Institut
More informationThe wake-sleep algorithm for unsupervised neural networks
The wake-sleep algorithm for unsupervised neural networks Geoffrey E Hinton Peter Dayan Brendan J Frey Radford M Neal Department of Computer Science University of Toronto 6 King s College Road Toronto
More informationMathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts 1 and Stephan Matthai 2 3rd Febr
HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts and Stephan Matthai Mathematics Research Report No. MRR 003{96, Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL
More informationBoxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun
Boxlets: a Fast Convolution Algorithm for Signal Processing and Neural Networks Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun AT&T Labs-Research 100 Schultz Drive, Red Bank, NJ 07701-7033
More information7 Rate-Based Recurrent Networks of Threshold Neurons: Basis for Associative Memory
Physics 178/278 - David Kleinfeld - Fall 2005; Revised for Winter 2017 7 Rate-Based Recurrent etworks of Threshold eurons: Basis for Associative Memory 7.1 A recurrent network with threshold elements The
More informationMapping Closure Approximation to Conditional Dissipation Rate for Turbulent Scalar Mixing
NASA/CR--1631 ICASE Report No. -48 Mapping Closure Approximation to Conditional Dissipation Rate for Turbulent Scalar Mixing Guowei He and R. Rubinstein ICASE, Hampton, Virginia ICASE NASA Langley Research
More informationEffects of refractory periods in the dynamics of a diluted neural network
Effects of refractory periods in the dynamics of a diluted neural network F. A. Tamarit, 1, * D. A. Stariolo, 2, * S. A. Cannas, 2, *, and P. Serra 2, 1 Facultad de Matemática, Astronomía yfísica, Universidad
More informationSpurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics
UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports Spurious Chaotic Solutions of Dierential Equations Sigitas Keras DAMTP 994/NA6 September 994 Department of Applied Mathematics and Theoretical Physics
More informationStationary Bumps in Networks of Spiking Neurons
LETTER Communicated by Misha Tsodyks Stationary Bumps in Networks of Spiking Neurons Carlo R. Laing Carson C. Chow Department of Mathematics, University of Pittsburgh, Pittsburgh PA 1526, U.S.A. We examine
More informationadap-org/ Jan 1994
Self-organized criticality in living systems C. Adami W. K. Kellogg Radiation Laboratory, 106{38, California Institute of Technology Pasadena, California 91125 USA (December 20,1993) adap-org/9401001 27
More informationNeural Nets and Symbolic Reasoning Hopfield Networks
Neural Nets and Symbolic Reasoning Hopfield Networks Outline The idea of pattern completion The fast dynamics of Hopfield networks Learning with Hopfield networks Emerging properties of Hopfield networks
More informationField indeced pattern simulation and spinodal. point in nematic liquid crystals. Chun Zheng Frank Lonberg Robert B. Meyer. June 18, 1995.
Field indeced pattern simulation and spinodal point in nematic liquid crystals Chun Zheng Frank Lonberg Robert B. Meyer June 18, 1995 Abstract We explore the novel periodic Freeredicksz Transition found
More informationProceedings of Neural, Parallel, and Scientific Computations 4 (2010) xx-xx PHASE OSCILLATOR NETWORK WITH PIECEWISE-LINEAR DYNAMICS
Proceedings of Neural, Parallel, and Scientific Computations 4 (2010) xx-xx PHASE OSCILLATOR NETWORK WITH PIECEWISE-LINEAR DYNAMICS WALTER GALL, YING ZHOU, AND JOSEPH SALISBURY Department of Mathematics
More informationSchiestel s Derivation of the Epsilon Equation and Two Equation Modeling of Rotating Turbulence
NASA/CR-21-2116 ICASE Report No. 21-24 Schiestel s Derivation of the Epsilon Equation and Two Equation Modeling of Rotating Turbulence Robert Rubinstein NASA Langley Research Center, Hampton, Virginia
More information1 Introduction Tasks like voice or face recognition are quite dicult to realize with conventional computer systems, even for the most powerful of them
Information Storage Capacity of Incompletely Connected Associative Memories Holger Bosch Departement de Mathematiques et d'informatique Ecole Normale Superieure de Lyon Lyon, France Franz Kurfess Department
More informationAbstract. In this paper we propose recurrent neural networks with feedback into the input
Recurrent Neural Networks for Missing or Asynchronous Data Yoshua Bengio Dept. Informatique et Recherche Operationnelle Universite de Montreal Montreal, Qc H3C-3J7 bengioy@iro.umontreal.ca Francois Gingras
More informationλ-universe: Introduction and Preliminary Study
λ-universe: Introduction and Preliminary Study ABDOLREZA JOGHATAIE CE College Sharif University of Technology Azadi Avenue, Tehran IRAN Abstract: - Interactions between the members of an imaginary universe,
More informationMODELS OF LEARNING AND THE POLAR DECOMPOSITION OF BOUNDED LINEAR OPERATORS
Eighth Mississippi State - UAB Conference on Differential Equations and Computational Simulations. Electronic Journal of Differential Equations, Conf. 19 (2010), pp. 31 36. ISSN: 1072-6691. URL: http://ejde.math.txstate.edu
More informationhidden -> output input -> hidden PCA 1 pruned 2 pruned pruned 6 pruned 7 pruned 8 9 pruned 10 pruned 11 pruned pruned 14 pruned 15
LOCOCODE PERFORMS NONLINEAR ICA WITHOUT KNOWING THE NUMBER OF SOURCES Sepp Hochreiter Fakultat fur Informatik Technische Universitat Munchen 8090 Munchen, Germany hochreit@informatik.tu-muenchen.de Jurgen
More informationReinforcement Learning, Neural Networks and PI Control Applied to a Heating Coil
Reinforcement Learning, Neural Networks and PI Control Applied to a Heating Coil Charles W. Anderson 1, Douglas C. Hittle 2, Alon D. Katz 2, and R. Matt Kretchmar 1 1 Department of Computer Science Colorado
More informationRate- and Phase-coded Autoassociative Memory
Rate- and Phase-coded Autoassociative Memory Máté Lengyel Peter Dayan Gatsby Computational Neuroscience Unit, University College London 7 Queen Square, London WCN 3AR, United Kingdom {lmate,dayan}@gatsby.ucl.ac.uk
More informationError Empirical error. Generalization error. Time (number of iteration)
Submitted to Neural Networks. Dynamics of Batch Learning in Multilayer Networks { Overrealizability and Overtraining { Kenji Fukumizu The Institute of Physical and Chemical Research (RIKEN) E-mail: fuku@brain.riken.go.jp
More informationForce Field for Water Based on Neural Network
Force Field for Water Based on Neural Network Hao Wang Department of Chemistry, Duke University, Durham, NC 27708, USA Weitao Yang* Department of Chemistry, Duke University, Durham, NC 27708, USA Department
More informationBursting and Chaotic Activities in the Nonlinear Dynamics of FitzHugh-Rinzel Neuron Model
Bursting and Chaotic Activities in the Nonlinear Dynamics of FitzHugh-Rinzel Neuron Model Abhishek Yadav *#, Anurag Kumar Swami *, Ajay Srivastava * * Department of Electrical Engineering, College of Technology,
More informationLateral organization & computation
Lateral organization & computation review Population encoding & decoding lateral organization Efficient representations that reduce or exploit redundancy Fixation task 1rst order Retinotopic maps Log-polar
More informationBasic Principles of Unsupervised and Unsupervised
Basic Principles of Unsupervised and Unsupervised Learning Toward Deep Learning Shun ichi Amari (RIKEN Brain Science Institute) collaborators: R. Karakida, M. Okada (U. Tokyo) Deep Learning Self Organization
More informationVisual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations
Visual Selection and Attention Shifting Based on FitzHugh-Nagumo Equations Haili Wang, Yuanhua Qiao, Lijuan Duan, Faming Fang, Jun Miao 3, and Bingpeng Ma 3 College of Applied Science, Beijing University
More information