How to build a brain: Cognitive Modelling
Terry & Chris
Centre for Theoretical Neuroscience, University of Waterloo

So far...
We have learned how to implement:
- high-dimensional vector representations
- linear transformations
- nonlinear transformations
- recurrent, dynamic networks

Semantic pointer architecture
The semantic pointer architecture uses these building blocks to construct cognitive models. Three things to outline:
- Semantics
- Syntax
- Control

SPA: Semantics
Semantic pointers are:
- Compressed, content-based addresses to information in association cortices
- Pointers because they are used to recall deep semantic information (content-based pointers)
- Semantic because they themselves define a surface semantic space

SPA: Semantics
E.g. the pointer would be the activity of the top level of a standard hierarchical visual model for object recognition (Serre et al., 2007, PNAS)
- This pointer can then support symbol manipulation
- It can also be used to reactivate a full visual representation

SPA: Surface/deep semantics
[Figure, applied to numbers: a) neuron tuning; b) generic SPs; c) input; d) reconstruction; e) surface semantics]

SPA: Surface/deep semantics
- Solomon & Barsalou (2004) showed that false pairings that are lexically associated take longer to process (e.g. dog-card is rejected about 100 ms quicker than banana-monkey)
- Kan et al. (2003) observed fMRI activation in perceptual systems only in the difficult cases
- Deep processing is not needed when a simple lexically-based strategy is sufficient to complete the task

SPA: Syntax
Vector Symbolic Architectures (VSAs):
- Smolensky's Tensor Products
- Kanerva's Spatter Codes
- Gayler's Multiply, Add, Permute (MAP) method
- Plate's Holographic Reduced Representations (HRRs)

Structured representations
All VSAs have some combination of 3 operations:
- Multiply (bind)
- Add (compose)
- Hide (protect from other vectors)
Chosen VSA: HRRs
- Constant vector length
- Protect and bind happen in 1 step
- Real valued

Structured representations
HRRs (Plate, 1991) use circular convolution (a NumPy sketch follows):
- Circular convolution (binding): $C = A \circledast B$, where $c_j = \sum_{k=0}^{n-1} a_k\, b_{(j-k) \bmod n}$
- Circular correlation (unbinding): $B \approx A \oslash C$, where $b_j = \sum_{k=0}^{n-1} a_k\, c_{(j+k) \bmod n}$
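To make the two operations concrete, here is a minimal NumPy sketch of binding and unbinding; it is an illustration of the sums above, not Nengo's implementation, and the dimensions and names are arbitrary.

    import numpy as np

    def bind(a, b):
        """Circular convolution: c_j = sum_k a_k * b_{(j-k) mod n}."""
        n = len(a)
        return np.array([sum(a[k] * b[(j - k) % n] for k in range(n))
                         for j in range(n)])

    def unbind(a, c):
        """Circular correlation: b_j = sum_k a_k * c_{(j+k) mod n}."""
        n = len(a)
        return np.array([sum(a[k] * c[(j + k) % n] for k in range(n))
                         for j in range(n)])

    rng = np.random.default_rng(0)
    n = 512
    a = rng.normal(0, 1 / np.sqrt(n), n)  # HRR vectors: i.i.d. N(0, 1/n)
    b = rng.normal(0, 1 / np.sqrt(n), n)

    c = bind(a, b)
    b_hat = unbind(a, c)  # noisy reconstruction of b
    print(np.dot(b_hat, b) / (np.linalg.norm(b_hat) * np.linalg.norm(b)))
    # similarity well above chance; the residual noise is why a
    # clean-up memory (later slides) is needed

Note that unbinding is only approximate: correlation inverts convolution up to noise, which grows with the number of bound pairs in the trace.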

Circular convolution
Circular convolution in frequency space is elementwise multiplication:
$\mathrm{FFT}(A \circledast B) = \mathrm{FFT}(A) \,.\, \mathrm{FFT}(B)$
The product must use complex numbers, where $a \cdot b = (a_1 + a_2 i)(b_1 + b_2 i)$.
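A quick sketch checking this identity: the direct O(n^2) sum against the FFT route (function names are illustrative).

    import numpy as np

    def bind_direct(a, b):
        """c_j = sum_k a_k * b_{(j-k) mod n}, computed directly."""
        n = len(a)
        return np.array([sum(a[k] * b[(j - k) % n] for k in range(n))
                         for j in range(n)])

    def bind_fft(a, b):
        """FFT(A (*) B) = FFT(A) . FFT(B), elementwise complex product."""
        return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

    rng = np.random.default_rng(1)
    a, b = rng.standard_normal((2, 128)) / np.sqrt(128)
    print(np.allclose(bind_direct(a, b), bind_fft(a, b)))  # True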

Neural implementation
Note first that $A \circledast B = W_{\mathrm{IFFT}}\,\big((W_{\mathrm{FFT}} A)\,.\,(W_{\mathrm{FFT}} B)\big)$
Because $W_{\mathrm{FFT}}$ and $W_{\mathrm{IFFT}}$ are linear transforms, they can be folded into the NEF connection weights (e.g. $\omega_{ik} = \alpha_k \tilde{\phi}_k W_{\mathrm{FFT}} \phi_i$); only the elementwise product is computed by the neural populations. (A sketch with an explicit DFT matrix follows.)
[Figure: network diagram of populations A, B, C, D computing A circularly convolved with B]
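A sketch of this factorization with an explicit DFT matrix, showing that the FFT/IFFT steps really are linear transforms that could be absorbed into connection weights (this uses scipy.linalg.dft for brevity; it is not Nengo code).

    import numpy as np
    from scipy.linalg import dft

    n = 64
    W_fft = dft(n)               # W_fft @ x == np.fft.fft(x)
    W_ifft = np.conj(W_fft) / n  # inverse DFT (dft(n) is symmetric)

    rng = np.random.default_rng(2)
    a, b = rng.standard_normal((2, n)) / np.sqrt(n)

    # linear transform -> elementwise product -> linear transform
    c = (W_ifft @ ((W_fft @ a) * (W_fft @ b))).real
    print(np.allclose(c, np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real))
    # True: only the elementwise product is a nonlinear operation

The design point is that the only computation the neurons must perform nonlinearly is the pairwise product; everything else rides on the connection weights.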

Circular convolution results

Circular convolution

Clean-up memory*
- To perform simple lexical processing, we need to ensure the result is a valid semantic pointer
- Because our chosen VSA is reduced, the output is typically not identical to a valid answer, so we need to clean up the results
- Elsewhere we have presented a fast spiking-network solution to this (Stewart et al., 2010)
- Nengo includes an idealization of this to help build models (sketched below)
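That idealization can be sketched as follows: compare the noisy result against every valid semantic pointer in a vocabulary and keep the best match (the function and names here are illustrative, not Nengo's API).

    import numpy as np

    def cleanup(noisy, vocab):
        """vocab: dict of name -> unit vector. Returns the best-matching SP."""
        names = list(vocab)
        sims = np.array([np.dot(noisy, vocab[name]) for name in names])
        best = names[int(np.argmax(sims))]
        return best, vocab[best]

    # usage: best_name, clean_vec = cleanup(noisy_result, vocab)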

SPA: Control
Two main control issues in the brain:
- Choosing the next best thing to do (action selection)
- Applying the action to control the flow of information (routing)

SPA: Action selection
The basal ganglia have been implicated in action selection.
[Figure: cortex-basal ganglia-thalamus circuit: striatum, GPe, GPi, STN, SNr, SNc, brainstem; thalamic nuclei VL, VA, MD, CM/Pf. Transmitters: dopamine (modulatory), glutamate (excitatory), GABA (inhibitory)]

SPA: Action selection
[Figure: Cortex -> Basal Ganglia (Sel.) -> Thalamus loop, with mappings M and M^t]
- M compares the cortical state with known SPs
- M^t maps the selected action to cortical control states
(a non-spiking sketch of this loop follows)
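As a rough non-spiking sketch of the select-then-route loop (the actual model is a spiking basal ganglia circuit; the names here are illustrative):

    import numpy as np

    def select_action(cortical_state, M, M_t):
        """M: rows are known SPs; M_t: rows are matching control states."""
        utilities = M @ cortical_state      # compare state with known SPs
        winner = int(np.argmax(utilities))  # basal ganglia picks one action
        return M_t[winner]                  # thalamus routes its control state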

SPA: Action selection*
Timing predictions based on the GABA neurotransmitter time constant (simple actions)

SPA: Control states
- Simple action selection isn't enough; we need to control the flow of information through the system
- A gating operation is ubiquitous (e.g. attention, sequencing, prioritizing, etc.)
- The controlled integrator is a simple 1D example (with the gate A switching between 0 and 1)
- We can add content to the control signal with a convolution network (see the sketch below)
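A minimal sketch of both ideas, assuming plain multiplicative gating and FFT-based correlation for the content-carrying control signal (illustrative names, not Nengo's API):

    import numpy as np

    def gate(signal, g):
        """Scalar gating: g = 0 blocks the signal, g = 1 passes it."""
        return g * signal

    def route_by_content(signal, control):
        """Content-based routing: unbind the control vector from the
        signal (circular correlation, done in frequency space)."""
        return np.fft.ifft(np.fft.fft(control).conj()
                           * np.fft.fft(signal)).real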

SPA: Question answering
Sentence: "There is a red circle and a blue triangle"
$S = \text{sentence} + \text{red} \circledast \text{circle} + \text{blue} \circledast \text{triangle}$
What is red? $\text{Ans} \approx \text{red}' \circledast S \approx \text{circle}$ (where $x'$ denotes the approximate inverse of $x$)
Transformation: make the red thing a square
$\text{convert} = \text{square} \circledast (\text{red}' \circledast S)'$
$\text{Ans} = \text{convert} \circledast S \approx \text{sentence} + \text{red} \circledast \text{square} + \text{blue} \circledast \text{triangle}$
(a runnable sketch follows)
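The question-answering step can be run end-to-end with random vectors standing in for learned semantic pointers; this is a sketch of the vector algebra, not the spiking model.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1024
    vocab = {name: rng.normal(0, 1 / np.sqrt(n), n) for name in
             ["sentence", "red", "circle", "blue", "triangle", "square"]}

    def bind(a, b):
        """Circular convolution via FFT."""
        return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

    def inv(a):
        """Approximate inverse under convolution: a'_j = a_{(-j) mod n}."""
        return np.concatenate(([a[0]], a[:0:-1]))

    S = (vocab["sentence"]
         + bind(vocab["red"], vocab["circle"])
         + bind(vocab["blue"], vocab["triangle"]))

    # "What is red?": unbind red from S, then clean up against the vocab
    ans = bind(inv(vocab["red"]), S)
    print(max(vocab, key=lambda k: np.dot(ans, vocab[k])))  # circle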

SPA: Control states*
Timing predictions based on the GABA neurotransmitter time constant (complex actions)

Conclusion
- The SPA/NEF addresses several challenges of neurally realistic cognitive modelling: high-dimensional processing, control, syntax, semantics, statistical inference, the relation to single-cell models, network and cellular dynamics, etc.
- It scales very well (Stewart & Eliasmith, 2010)
- A nascent research approach that is flexible and unifying