Liquid Computing. Wolfgang Maass. Institut für Grundlagen der Informationsverarbeitung, Technische Universität Graz, Austria


1 Liquid Computing. Wolfgang Maass, Institut für Grundlagen der Informationsverarbeitung (Institute for Theoretical Computer Science), Technische Universität Graz, Austria

2 How can we understand the computations carried out by the brain? [Figure: neurons and synapses]

3 A common mistake: trying to understand the computational organization of the brain from the perspective of a digital computer (e.g., the IBM Blue Gene supercomputer). One obvious difference: computers are programmed; brains largely have to learn what to compute.

4 Computers carry out offline computations (unless they are controlling a process), where different input components are presented all at once and there is no strict bound on the computation time.

5 Typical computations in the brain are online computations. In online computations, new input components arrive all the time. A real-time algorithm produces an output for each new input component within a fixed time interval. An anytime algorithm can be prompted at any time to provide its current best guess of a proper output (which should integrate as many of the previously arrived input pieces as possible).

6 Thus if you have a computer model of a cortical microcircuit, or of a larger neural system, it is not clear how you should test its computational capability. Relatively few studies have tried to test neural circuit models on difficult computational tasks. It would be desirable to collect a few such tasks on a website, which could then be used by many labs to evaluate the computational capabilities (and/or learning capabilities) of their neural circuit models.

7 Summary: obvious differences in the organization of brains and computers. The computations in computers are programmed, whereas most computations in brains result from learning (or at least need permanent retuning). Brains carry out online computations (rather than offline computations), probably even anytime computations. Neural circuits consist of heterogeneous components that all have different inherent temporal dynamics.

8 Distribution of different anatomical types of inhibitory neurons across the different cortical layers

9 All these different types of neurons not only have a different anatomy and connection pattern, they also respond in different ways to the same input (shown here for a step current):

10 Short-term dynamics of synapses: every synapse has a complex inherent temporal dynamics (and can NOT be modeled by a single parameter w, as in artificial neural networks). Model for a dynamic synapse with parameters w, U, D, F according to [Markram, Wang, Tsodyks, PNAS 1998]: the amplitude A_k of the PSP for the k-th spike in a spike train with interspike intervals Δ_1, Δ_2, ..., Δ_{k-1} is modeled by the equations

A_k = w · u_k · R_k
u_k = U + u_{k-1} (1 - U) exp(-Δ_{k-1} / F)
R_k = 1 + (R_{k-1} - u_{k-1} R_{k-1} - 1) exp(-Δ_{k-1} / D)
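To make the recursion concrete, here is a minimal sketch (not the authors' code) that evaluates these equations for a given spike train; the parameter values are illustrative placeholders, not the empirically fitted ones.

```python
import numpy as np

def psp_amplitudes(spike_times, w=1.0, U=0.5, D=1.1, F=0.05):
    """Amplitudes A_k of the PSPs evoked by a spike train at a dynamic synapse
    (Markram, Wang, Tsodyks, PNAS 1998). The parameter values are placeholders;
    D and F are the depression and facilitation time constants (in seconds)."""
    u, R = U, 1.0                        # state for the first spike: u_1 = U, R_1 = 1
    amps = [w * u * R]
    for k in range(1, len(spike_times)):
        dt = spike_times[k] - spike_times[k - 1]       # interspike interval Delta_{k-1}
        u_new = U + u * (1.0 - U) * np.exp(-dt / F)
        R_new = 1.0 + (R - u * R - 1.0) * np.exp(-dt / D)
        u, R = u_new, R_new
        amps.append(w * u * R)
    return np.array(amps)

# Regular 20 Hz spike train: with these placeholder parameters the synapse depresses,
# so successive PSP amplitudes shrink.
print(psp_amplitudes(np.arange(0.0, 0.5, 0.05)))
```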

11 Functional consequence of the inherent dynamics of synapses: different spikes produce different postsynaptic responses, depending on the position of the spike within a spike train. Shown here are the amplitudes of synaptic responses of two common types of synapses to the same spike train (F1 is facilitating and F2 is depressing). [Figure: one spike train sent to two synapses; output amplitudes of each synapse]

12 Consequence: standard computational models from computer science, such as Turing machines, are not adequate for understanding brain-style computations.

13 Insert: Turing machines. A deterministic Turing machine with a single bi-infinite tape and a single tape head is defined as a 6-tuple (Q, Σ, B, δ, q_1, H), where
1. Q is a finite set of states with start state q_1 ∈ Q.
2. Σ is a finite set of tape symbols with blank symbol B ∈ Σ.
3. δ: Q × Σ → Σ × D × Q is the transition function, defined for all q ∈ Q, with tape head movements D = {L, R}.
4. H ⊆ Q is the set of halt states.
5. If q ∈ H, then δ(q, σ) is undefined for all σ ∈ Σ (the machine halts).
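As a small illustration of this definition, the following toy simulator (a sketch added here, not part of the lecture) runs a deterministic single-tape machine given as such a transition function δ; the example machine and its states are made up.

```python
def run_tm(delta, q, tape, head=0, blank='B', halt=('h',), max_steps=100):
    """Simulate a deterministic Turing machine with one bi-infinite tape.
    delta maps (state, symbol) -> (symbol to write, move 'L' or 'R', next state)."""
    cells = dict(enumerate(tape))           # sparse tape; unwritten cells read as blank
    while q not in halt and max_steps > 0:
        sym = cells.get(head, blank)
        write, move, q = delta[(q, sym)]
        cells[head] = write
        head += 1 if move == 'R' else -1
        max_steps -= 1
    return q, cells

# Hypothetical example machine: flip every 0/1 while moving right, halt on the blank.
delta = {('q1', '0'): ('1', 'R', 'q1'),
         ('q1', '1'): ('0', 'R', 'q1'),
         ('q1', 'B'): ('B', 'R', 'h')}
print(run_tm(delta, 'q1', '0110'))
```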

14 An alternative: recurrent neural networks, where computations are viewed as trajectories to an attractor (that encodes the result of the computation) in the resulting dynamical system. Problems with this model: it is hard to get it to work (and learn) with realistic heterogeneous neural components; it is not suitable for online computing (especially not for real-time computing); experimental data rarely show convergence to attractors, rather they suggest that neural circuits are permanently perturbed.

15 Trajectory of the response of 60 neurons in primary visual cortex of cat to a static pattern. [Figure: unit index vs. time for stimuli A and D] [Nikolic, Haeusler, Singer, Maass, 2009]

16 Comparison of the first 3 principal components of the two trajectories in response to stimuli A and D

17 Therefore we have proposed Liquid Computing as a paradigm for understanding computations in the brain [Maass, Natschläger, Markram, 2002]. It is an attempt to understand
- online computing in dynamical systems
- computing with trajectories (rather than attractors) in dynamical systems
- how neural circuits can function in spite of their heterogeneous components (in fact: why they need heterogeneous components)
- neural circuits from the perspective of learning (how could they optimally support learning?)
- how different computations can be multiplexed within the same neural circuit
- how neural circuits constructed according to anatomical and physiological data (rather than according to the ideas of a theoretician) can carry out complex computations.

18 Resulting computational model for a cortical microcircuit. This model assumes that projection neurons in layers 2/3 and layers 5/6 learn to read out information from the state trajectory of a cortical microcircuit. One could in principle apply this computational model also to the whole cortex, with the medium spiny neurons of the striatum as readout.

19 In a first approximation one can model the computational operation of a projection neuron by a linear gate, i.e., by a weighted sum of presynaptic spike trains, which are low-pass filtered to model their contribution to the membrane potential of the readout neuron. [Figure: spikes from presynaptic neurons in the circuit produce postsynaptic potentials in the readout neuron, which are summed with weights w_1, w_2, ..., w_d] We sometimes refer to the high-dimensional analog input to such a readout (each component of which results from low-pass filtering the spike train of a presynaptic neuron) as a liquid state.
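A minimal sketch of this first approximation, assuming a simple exponential low-pass filter as the PSP model; the spike times, weights, and the time constant are made-up illustrative values.

```python
import numpy as np

def liquid_state(spike_trains, t, tau=0.03):
    """Liquid state x(t): each component is the exponentially low-pass filtered spike
    train of one presynaptic neuron (a crude stand-in for its summed PSP trace).
    spike_trains: list of arrays of spike times in seconds; tau: assumed time constant."""
    return np.array([np.exp(-(t - st[st <= t]) / tau).sum() for st in spike_trains])

def readout(weights, spike_trains, t):
    """First-approximation readout: a linear gate, i.e. a weighted sum of the liquid state."""
    return float(weights @ liquid_state(spike_trains, t))

# Illustrative use with two presynaptic neurons (made-up spike times and weights).
trains = [np.array([0.010, 0.020, 0.050]), np.array([0.030, 0.040])]
w = np.array([0.8, -0.3])
print(readout(w, trains, t=0.060))
```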

20 A cortical microcircuit could support the learning capability of linear projection neurons by providing:
- an analog fading memory (in order to accumulate information over time in the liquid state, so that it can provide at any time a summary of recent inputs)
- sparse activity (for faster learning, especially of readouts with non-negative weights)
- subcircuits which extract features that are useful for the tasks of many projection neurons ("multiplexing")
- a nonlinear projection into a high-dimensional space (kernel property)

21 Insert: What is a kernel (in the terminology of machine learning)? A kernel provides numerous nonlinear combinations of input variables, in order to boost the expressive power of any subsequent linear readout. Example: if a circuit precomputes all products x_i x_j of n input variables x_1, ..., x_n, then a subsequent linear readout can compute any quadratic function of the original input variables x_1, ..., x_n. More abstractly, a kernel should map saliently different input vectors onto linearly independent output vectors (note that this more general computational goal of a cortical circuit does not require precise execution of any nonlinear computation).
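A small sketch of the quadratic-kernel example: precomputing the products x_i x_j lets a purely linear readout realize an arbitrary quadratic function. The feature ordering and the example target function are illustrative choices.

```python
import numpy as np

def quadratic_features(x):
    """Kernel-style preprocessing: a constant, the inputs themselves, and all
    pairwise products x_i * x_j, so that a linear readout of these features can
    realize any quadratic function of the original inputs x_1, ..., x_n."""
    n = len(x)
    prods = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate(([1.0], x, prods))

x = np.array([1.5, -0.5, 2.0])
phi = quadratic_features(x)                    # features: 1, x1, x2, x3, x1*x1, x1*x2, ...
w = np.zeros(len(phi))
w[1], w[5], w[9] = 2.0, 1.0, -1.0              # readout weights for 2*x1 + x1*x2 - x3**2
print(w @ phi, 2 * x[0] + x[0] * x[1] - x[2] ** 2)   # the linear readout reproduces it
```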

22 Insert: What is a kernel (in the terminology of machine learning)? Important consequences for a biological interpretation: reducing plasticity to linear readouts has the advantage that learning cannot get stuck in local minima of the error function. In addition, the same fixed kernel can serve many different linear readouts.

23 What range of computations (on the stream of circuit inputs) can in principle be carried out by a projection neuron if it is supported by generic preprocessing in the cortical microcircuit? (In the absence of noise, using a mean-field model for the circuit.)
1. By a suitable adjustment of its weights, a readout neuron can be trained to approximate any Volterra series, provided that the microcircuit is sufficiently large and consists of sufficiently diverse components. [Maass, Markram, 2004]
2. If one allows feedback from projection neurons back into the circuit, and if a readout neuron can learn to compute any static continuous function, then this model becomes universal for analog (and digital) computation on input streams. [Maass, Joshi, Sontag, PLOS Comp. Biol. 2007]

24 Volterra series: these are those computational operations (filters) F on input streams u(s) that are time-invariant (i.e., input-driven) and only require a fading memory:

(Fu)(t) = α_1 ∫_0^∞ h_1(τ_1) u(t - τ_1) dτ_1 + α_2 ∫_0^∞ ∫_0^∞ h_2(τ_1, τ_2) u(t - τ_1) u(t - τ_2) dτ_1 dτ_2 + ...

25 Theorem (based on [Boyd and Chua, 1985]): any filter F that is defined by a Volterra series can be approximated with any desired degree of precision by the sketched computational model,
- if there is a rich enough pool B of basis filters (time-invariant, with fading memory) from which the basis filters B_1, ..., B_k in the filterbank can be chosen (B needs to have the pointwise separation property), and
- if there is a rich enough pool R from which the readout functions f can be chosen (R needs to have the universal approximation property, i.e., any continuous bounded function can be approximated by functions from R).
[Figure: input u(s) for s ≤ t passes through basis filters B_1, ..., B_k, producing the filter output x(t); a memoryless readout computes y(t) = f(x(t))]
Def: A class B of basis filters has the pointwise separation property if for any two input functions u(·), v(·) with u(s) ≠ v(s) for some s ≤ t there exists a basis filter B ∈ B with (Bu)(t) ≠ (Bv)(t).
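A minimal sketch of the sketched filterbank-plus-readout architecture, assuming leaky integrators with different time constants as the basis filters B_1, ..., B_k and a fixed linear-plus-quadratic readout f; these particular choices are illustrative, not the ones required by the theorem.

```python
import numpy as np

def basis_filter_states(u, dt, taus):
    """Apply k basis filters (here: leaky integrators with time constants taus)
    to the input stream u(s); returns the filter outputs x(t) at every time step."""
    x = np.zeros(len(taus))
    states = []
    for u_t in u:
        x = x + dt * (-x / taus + u_t)          # dx_i/dt = -x_i/tau_i + u(t)
        states.append(x.copy())
    return np.array(states)

def memoryless_readout(x, w):
    """y(t) = f(x(t)): an illustrative memoryless readout (linear + quadratic terms)."""
    k = len(x)
    return x @ w[:k] + (x ** 2) @ w[k:]

dt = 0.001
taus = np.array([0.01, 0.03, 0.1, 0.3])                  # time constants of B_1..B_4
u = np.sin(2 * np.pi * 5 * np.arange(0, 1, dt))          # example input stream u(s)
X = basis_filter_states(u, dt, taus)
w = np.random.default_rng(0).normal(size=2 * len(taus))  # fixed readout parameters
y = np.array([memoryless_readout(x, w) for x in X])      # output y(t) at every step
print(y[:5])
```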

26 We have called this simple computational model a "Liquid State Machine" because the state of the dynamical system is allowed to be "liquid" rather than static. It generalizes finite state machines to continuous input values u(s), continuous output values y(t), and continuous time t.

27 The computational power of the model makes a qualitative jump if one allows feedback from trained readout neurons back into the circuit. If the readout neuron is a striatal neuron, the feedback would result from the loop back to the cortex (via the thalamus).

28 Theorem: There exists a large class S_n of analog circuits C with fading memory (described by systems of n first-order differential equations) that acquire through feedback universal computational capabilities for analog computing, in the following sense: [the precise statement was given as a displayed formula on the slide]. This holds in particular for neural circuits C defined by DEs of the form shown on the slide (under some conditions on the λ_i, a_ij, b_i). Note: any Turing machine can be simulated by such a dynamical system [Branicky, 1995].
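The displayed system of differential equations did not survive transcription; the sketch below assumes the form x_i' = -λ_i x_i + σ(Σ_j a_ij x_j + b_i v) with σ = tanh and with a linear readout fed back into the input v, purely as an illustration of feedback in such a circuit. The equation form and all parameter values are assumptions.

```python
import numpy as np

def simulate_with_feedback(T=2.0, dt=0.001, n=20, seed=1):
    """Euler simulation of an assumed circuit form
        x_i' = -lam_i * x_i + tanh(sum_j a_ij * x_j + b_i * v(t)),
    where v(t) is the external input plus feedback K(x) from a fixed linear readout.
    Equation form, sigma = tanh, and all parameter values are assumptions."""
    rng = np.random.default_rng(seed)
    lam = rng.uniform(1.0, 5.0, n)                 # leak rates lambda_i > 0
    A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # coupling coefficients a_ij
    b = rng.normal(0.0, 1.0, n)                    # input weights b_i
    k = rng.normal(0.0, 1.0 / n, n)                # readout / feedback weights K
    x = np.zeros(n)
    trace = []
    for step in range(int(T / dt)):
        u_ext = np.sin(2 * np.pi * step * dt)      # external input stream
        v = u_ext + k @ x                          # input augmented by feedback
        x = x + dt * (-lam * x + np.tanh(A @ x + b * v))
        trace.append(k @ x)                        # readout output y(t) = h(x(t))
    return np.array(trace)

print(simulate_with_feedback()[:5])
```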

29 Note: The required feedback functions K and readout functions h are always continuous (and memoryless), hence they provide suitable targets for learning.

30 Testing the liquid computing idea on simple computer models of generic cortical microcircuits (where only the weights of the readouts are trained for specific tasks):
- neurons: leaky integrate-and-fire neurons, 20% of them inhibitory; neuron a is synaptically connected to neuron b with probability C · exp(-D(a, b)² / λ²)
- synapses: dynamic synapses with fixed parameters w, U, D, F chosen from distributions based on empirical data from the lab of Markram
- input spike trains injected into 30% randomly chosen neurons, with fixed randomly chosen amplitudes
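A sketch of how such distance-dependent connectivity could be generated, assuming neurons placed on a 3 x 3 x 15 grid (135 neurons) and a single value of C for all neuron pairs; in the original model C depends on whether the pre- and postsynaptic neurons are excitatory or inhibitory.

```python
import numpy as np

def random_microcircuit(shape=(3, 3, 15), C=0.3, lam=2.0, seed=0):
    """Connectivity sketch: neurons on a 3D grid, a -> b connected with probability
    C * exp(-D(a, b)**2 / lam**2), where D is the Euclidean distance on the grid.
    A single C is used for all pairs (an assumption; in the model C depends on the
    excitatory/inhibitory types of a and b)."""
    rng = np.random.default_rng(seed)
    coords = np.array([(x, y, z) for x in range(shape[0])
                                 for y in range(shape[1])
                                 for z in range(shape[2])], dtype=float)
    D2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)  # squared distances
    P = C * np.exp(-D2 / lam ** 2)                                 # connection probabilities
    np.fill_diagonal(P, 0.0)                                       # no self-connections
    return rng.random(P.shape) < P                                 # boolean adjacency matrix

W = random_microcircuit()
print(W.shape, int(W.sum()), "synapses")
```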

31 Training 7 different linear readouts for 7 different tasks. Circuit input: 4 Poisson spike trains with firing rates f1(t) for spike trains 1 and 2 and firing rates f2(t) for spike trains 3 and 4, drawn independently every 30 ms from the interval [0, 80] Hz; 7 linear readouts with adjustable weights.
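A sketch of this input generation, assuming a simple per-time-bin Bernoulli approximation of the Poisson spike trains; bin size and duration are arbitrary choices.

```python
import numpy as np

def circuit_input(T=1.0, dt=0.0005, seed=0):
    """4 Poisson input spike trains: rates f1(t) (trains 1 and 2) and f2(t)
    (trains 3 and 4) are redrawn independently every 30 ms from [0, 80] Hz.
    Spikes are drawn per time bin with probability rate * dt (Bernoulli approximation)."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    spikes = np.zeros((4, steps), dtype=bool)
    f1 = f2 = 0.0
    for s in range(steps):
        if s % int(0.03 / dt) == 0:                  # every 30 ms: draw new rates
            f1, f2 = rng.uniform(0.0, 80.0, size=2)
        rates = np.array([f1, f1, f2, f2])
        spikes[:, s] = rng.random(4) < rates * dt
    return spikes

spikes = circuit_input()
print(spikes.sum(axis=1), "spikes per input train")
```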

32 Testing the performance of this model on a benchmark task: recognition of spoken digits (introduced by Hopfield and Brody in PNAS 2000 and 2001, with a particular transcription of speech into spike trains): recognition of the spoken words "zero", "one", ..., "nine", each spoken 10 times by 5 different speakers, each spoken word encoded into 40 spike trains by Hopfield and Brody (we used 300 examples for training and 200 for testing; note that the circuit constructed by H&B did not require any training).

33 Comparing the performance of generic cortical microcircuits with specially constructed circuits:
- linear readouts from a generic neural microcircuit model (consisting of 135 neurons) recognize, after training, spoken test words as well as the ingenious circuit of >> 6000 I&F neurons constructed especially for this task by Hopfield and Brody
- the generic neural microcircuit model can handle linear time warps in the input at least as well as the circuit constructed to achieve that (and it can also handle nonlinear time warps)
- the generic neural microcircuit model classifies the spoken word instantly when the word ends (i.e., in real time), rather than ... ms later

34 In fact, linear readouts from a generic microcircuit model can also classify the trajectory of circuit states while the word is still being spoken. This provides an example of an anytime algorithm. Example: anytime recognition of "one". [Figure: input, liquid, and readout activity over time for "one" (speaker 5), "one" (speaker 3), "five" (speaker 1), and "eight" (speaker 4)]

35 How can a linear readout neuron learn to carry out this classification task, i.e., to fire whenever "one" is currently spoken? [Figure: microcircuit and readout responses for "one" (two speakers), "five", and "eight"; the various circuit states x(t) and the resulting values of w·x(t) for class "one" versus class "other"] Thus: linear readouts can form complex equivalence classes of circuit states x(t).
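A sketch of training such a linear readout by regularized least squares on recorded liquid states x(t), with made-up stand-in data in place of the actual circuit states; the regression approach is an assumption, not necessarily the training rule used in the original experiments.

```python
import numpy as np

def train_linear_readout(X, y, reg=1e-3):
    """Fit readout weights (plus bias) by regularized least squares so that
    w . x(t) approximates the target (+1 for class "one", -1 for "other").
    X: (samples, neurons) liquid states; y: (samples,) targets."""
    Xb = np.hstack([X, np.ones((len(X), 1))])                    # append bias column
    return np.linalg.solve(Xb.T @ Xb + reg * np.eye(Xb.shape[1]), Xb.T @ y)

def classify(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Stand-in data: 200 hypothetical liquid states of 135 neurons with a class-dependent offset.
rng = np.random.default_rng(0)
y = np.where(rng.random(200) < 0.5, 1.0, -1.0)
X = rng.normal(size=(200, 135)) + 0.4 * y[:, None]
w = train_linear_readout(X[:150], y[:150])
print("test accuracy:", (classify(w, X[150:]) == y[150:]).mean())
```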

36 David Verstraeten (Univ. of Gent) has recently shown that the performance of the generic cortical microcircuit model becomes comparable to that of state-of-the-art speech recognition methods if the transformation from speech to spike trains is done less ad hoc. In a new EU project, ORGANIC, this new approach towards speech recognition (and reading of handwritten text) will be studied more systematically (using also Echo State Networks, proposed independently by Herbert Jäger).

37 The notion of liquid computing has been taken literally by some people: Fernando and Sojakka, "Pattern recognition in a bucket: A real liquid brain", ECAL 2003: "This paper demonstrates that the waves produced on the surface of water can be used as a medium for a Liquid State Machine. ... We made a bucket of water, vibrated it with lego motors, filmed the waves with a webcam and put it through a perceptron on matlab and got it to solve the XOR problem and do speech recognition."

38 They injected the same speech data as Hopfield and Brody into a bucket of water.

39 Examples of liquid states in a bucket of water. [Figure: snapshots of the water surface for "Zero" and "One"]

40 A computational task where feedback from projection neurons is needed: output at any time t the integral over preceding differences in input rates (this cannot be done by a fading-memory circuit!).

41 Result (for test inputs) after training of linear readouts: the continuous attractor CA(t) was trained to approximate ∫_0^t (r_1(s) - r_2(s)) ds

42 Can the liquid computing approach help us to understand biological data? Doesn't it suggest that the intracellular fluid of the brain is enough for computing, or that random connections between neurons suffice?

43 Constructing a liquid model for cortical microcircuits. [Figure: dual intracellular recordings in vitro (100 μm scale); somatosensory, motor and visual areas of rat and cat; connection probabilities and strengths (mean PSP amplitudes at the soma, in mV)] [Thomson et al., 2002]

44 Applying a cortical microcircuit model as a liquid. [Figure: laminar circuit of 560 HH point neurons with dynamic synapses; feedforward input from lower cortical areas and the nonspecific thalamus; lateral or top-down input from higher cortical areas; liquid state machine architecture as in Maass et al. (2002)]

45 Loss in computational performance if the data-based connectivity structure of the Thomson circuit is replaced by a random graph (with the same number of neurons and synapses):

tasks        performance loss
linear       12.2 %
non-linear   36.9 %
memory       32.6 %
all          25.0 %

A similar performance loss occurs if data-based dynamic synapses are replaced by static ("linear") synapses, or if the data-based specification of synaptic dynamics (depending on the type of pre- and postsynaptic neuron) is scrambled.

46 Can the superiority of the laminar model be related to specific structural features? What aspects of the data-based circuit structure are essential for its superior performance? Strategy: construct additional control circuits which incorporate only specific structural features of the data-based microcircuit.

47 What structural features should be considered? Control circuits were generated from both templates by randomizing the connectivity structure while keeping certain structural properties:
- Amorphous circuits: pre- and postsynaptic neuron type (exc./inh.)
- Degree-controlled circuits [Kannan et al., 1999]: degree distribution of neurons in each population; pre- and postsynaptic neuron type (exc./inh.)
- Small-world circuits [Kaiser & Hilgetag, 2004; Watts & Strogatz, 1998]: small-world property
- Data-based circuits: original functional / potential template

48 Structural feature: clustering. Small-world property (Watts, Strogatz, 1998): higher clustering coefficient C than a random circuit, while maintaining a comparable average shortest path length L. C is the average fraction of existing links between the neighbors of a node; L is the number of links on the shortest path between two nodes (the sketched example graph has C = 2/3 and L = 2). For the undirected Thomson et al. (2002) circuit: C = 0.36 (37% higher than the amorphous control) and L = 1.78.
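For illustration, the two quantities C and L can be computed with networkx; the graphs below are generic small-world and random graphs, not the Thomson-circuit data.

```python
import networkx as nx

# Generic illustration (not the Thomson-circuit data): compare C and L of a
# small-world graph with a random graph having the same number of nodes and edges.
ws = nx.connected_watts_strogatz_graph(n=200, k=8, p=0.1, seed=0)
er = nx.gnm_random_graph(n=200, m=ws.number_of_edges(), seed=0)
if not nx.is_connected(er):                     # L is only defined on a connected graph
    er = er.subgraph(max(nx.connected_components(er), key=len)).copy()

for name, g in [("small-world", ws), ("random", er)]:
    print(name,
          "C =", round(nx.average_clustering(g), 3),
          "L =", round(nx.average_shortest_path_length(g), 3))
```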

49 Structural feature: degree distributions. Degree: the sum of the incoming (afferent) and outgoing (efferent) links of a node. In-degree and out-degree specify the amount of convergence and divergence of a given node. [Figure: random network vs. scale-free network (count ∝ k^-γ)]

50 Degree distribution of several microcircuit models. [Figure]

51 Structural feature: motifs. Motif: a pattern of interconnections occurring, either in directed or undirected graphs, at a number significantly higher than in randomized versions of the graph. [Figure: motif counts of macaque and C. elegans networks (Sporns & Koetter, 2004)]

52 Motif analysis in the cortical microcircuit models: motif compositions differ significantly from random networks. The motif Z-score is computed relative to the amorphous (and degree-controlled) control circuits as

Z = (Count - Count_amorphous) / Std(Count_amorphous)
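A small sketch of this Z-score, assuming that Count_amorphous refers to the distribution of motif counts over a set of randomized control circuits; the numbers are hypothetical.

```python
import numpy as np

def motif_z_score(count_data_based, counts_in_controls):
    """Z-score of a motif count in the data-based circuit relative to the distribution
    of counts of the same motif in randomized (amorphous) control circuits."""
    counts = np.asarray(counts_in_controls, dtype=float)
    return (count_data_based - counts.mean()) / counts.std()

# Hypothetical counts for one 3-neuron motif: data-based circuit vs. 20 amorphous controls.
controls = np.random.default_rng(0).normal(250.0, 30.0, size=20)
print(motif_z_score(340.0, controls))
```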

53 Impact of structure on performance. Average performance decrease of control circuits compared to data-based circuits (Thomson et al., 2002):

tasks / circuits   amorphous   small-world   degree-controlled   degree-controlled w/o i/o spec.
linear             12.2 %      5.3 %         -0.6 %              5.6 %
non-linear         36.9 %      11.3 %        -2.3 %              4.6 %
memory             32.6 %      41.6 %        12.0 %              35.8 %
all                25.0 %      15.4 %        1.6 %               12.0 %

Considered tasks:
- linear tasks (e.g., current input firing rates or spike template labels)
- non-linear tasks (e.g., multiplication/division of the current input firing rates)
- memory tasks (e.g., recall of spike templates)

54 Differences in dynamic properties between data-based circuits and various types of control circuits: the internal dynamics is less influenced by input noise and less chaotic in data-based circuits. [Figure: Euclidean distance of the inputs to the L5 readout neurons over time, after one spike was moved by 1 ms at time 100 ms (identical background noise and initial conditions), for the data-based, amorphous, small-world, degree-controlled, and degree-controlled w/o input or output specificity circuits]

55 Experimentally testable predictions of the liquid computing model for cortical microcircuits:
- temporal integration of information (fading memory)
- general-purpose nonlinear preprocessing (kernel property)
- diversity of neural readouts from the same cortical microcircuit
- sparse activity of cortical microcircuits
- spontaneous activity, which is needed to keep the circuit in a suitable dynamic regime for liquid computing

56 Experimental setup: parallel recordings in anaesthetised cats (multiple electrodes, spike-sorted units), primary visual cortex (area 17).

57 The first prediction, temporal integration of information, had first been tested through multi-unit recordings from primary visual cortex in anaesthetized cats [Nikolic, Haeusler, Singer, Maass, 2007]. [Figure: unit index vs. time for responses to letters A and D]

58 We trained a linear readout to extract at any time from the trajectory the information about the previously shown letter (in the same way as for our computer simulations). [Figure: spikes from presynaptic neurons in the circuit produce postsynaptic potentials in the readout neuron, summed with weights w_1, w_2, ..., w_d]

59 Performance of the trained linear readout on test data (not used for training). [Figure: snapshots from the trajectories in two trials with letters A and D]

60 In further experiments the temporal integration of information from several subsequent frames of visual input was analyzed.

61 A linear readout can extract substantial amounts of information about the first letter, even after the neurons were overwritten by a second letter. [Figure: snapshots from the trajectories in two trials with letter sequences ABC and DBC]

62 Substantially more information can be extracted by linear readouts that know how much time has passed since stimulus onset (shown here are results from 3 different cats). [Figure: classification performance (% correct) and mean firing rate (Hz) over time for the letter stimuli, for cats 1 to 3]

63 Information from two subsequent letters is nonlinearly combined in the circuit ("kernel property"). [Figure: readout performance (% correct) over time for decoding the 1st letter, the 2nd letter, and the XOR of the two letters, compared with an external XOR, for cats 1 and 3]

64 Circuit model of cat primary visual cortex. [Figure: LGN output and receptive fields feeding visual input into a laminar circuit with lateral connections] Synaptic connectivity: Thomson et al., Cereb. Cortex 12 (2002); 5000 HH neurons: Destexhe et al., Neuroscience 107 (2001); dynamic synapses: Markram et al., PNAS 95 (1998).

65 The simulated cortical microcircuit shows similar response properties (with a larger fraction of NMDA synapses).

66 The simulated cortical microcircuit shows similar computational properties.

67 Summary: I have proposed Liquid Computing as a paradigm for understanding computations in the brain. It is an attempt to understand
- online computing in dynamical systems
- computing with trajectories (rather than attractors) in dynamical systems
- how neural circuits can function in spite of their heterogeneous components (in fact: why they need heterogeneous components)
- neural coding from the perspective of the brain (i.e., the perspective of readout neurons)
- neural circuits from the perspective of learning (how could they optimally support learning?)
- how different computations can be multiplexed within the same neural system
- how neural circuits constructed according to anatomical and physiological data (rather than according to the ideas of a theoretician) can carry out complex computations.
