
1 BIOLOGICALLY INSPIRED COMPUTER MODELS FOR VISUAL RECOGNITION Ângelo Cardoso 27 May, 2010 Symbolic and Sub-Symbolic Learning Course Instituto Superior Técnico

2 Index: Human Vision; Retinal Ganglion Cells; Simple and Complex Cells; Cell Columns and Layers; Computational Models: Neocognitron, LISSOM, HMAX

3 Human Vision Is it just raw sensory information? Our brain creates a mental picture of what it is seeing and tries to make sense of the raw inputs. Optical illusions illustrate some of the ways in which our brain makes interpretations.

4-7 Some Optical Illusions (four image-only slides)

8 Visual System D. Hubel and T. Wiesel received the 1981 Nobel Prize in Physiology or Medicine for their discoveries concerning information processing in the visual system. Fig. Primary visual cortex location in humans. Fig. Visual processing pathway. *Section figures from Eye, Brain and Vision by D. Hubel

9 Retina The retina translates light into nerve signals and discriminates wavelength so we can see colors. The retina is connected to the rest of the brain through the optic nerve. The retina's output is produced by the ganglion cells, whose outputs bundle into the optic nerve. Two main types of ganglion cells: on-center and off-center. Fig. Left: on-center ganglion cell response; Right: off-center. Fig. A cross-section detail of the retina
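The on-center/off-center behaviour described above is commonly modeled as a difference-of-Gaussians receptive field: a narrow excitatory center minus a broader inhibitory surround. A minimal NumPy sketch of this standard model (kernel size and sigma values are illustrative assumptions, not values from the slides):

```python
import numpy as np

def dog_kernel(size=15, sigma_c=1.0, sigma_s=3.0):
    """On-center difference-of-Gaussians receptive field:
    a narrow excitatory center minus a wide inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

kernel = dog_kernel()
# A uniform patch stimulates center and surround together -> weak response;
# a small bright spot on the center -> strong response.
uniform = np.ones((15, 15))
spot = np.zeros((15, 15)); spot[7, 7] = 1.0
print(np.sum(kernel * uniform), np.sum(kernel * spot))
```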

10 Experiments Fig. Monkey visual cortex (1mm² area)

11 Retinal Ganglion Cells Receptive Fields A receptive field is the area on which a cell's response depends, together with how that area must be stimulated to elicit a response. Deeper into the visual cortex these descriptions become more complex. Neighbouring ganglion cells have overlapping receptive fields. Topographical organization is also present throughout the visual cortex. Fig. Retinal Ganglion Cells Receptive Fields

12 Lateral Geniculate Body Lateral geniculate cells receive input from retinal ganglion cells and have backward connections from the cerebral cortex. They do not seem to perform a profound transformation of the information received from retinal ganglion cells. It is hypothesized that they play a role in attention.

13 Simple Cells Simple cells recognize patterns with a specific orientation and position. Their receptive field size depends mainly on the position of the retinal area to which they react, and even for a particular part of the retina their size can vary. Fig. Simple Cell Response Fig. Three typical receptive fields for simple cells
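Oriented simple-cell receptive fields like the three shown in the figure are commonly modeled as Gabor filters, a sinusoidal grating under a Gaussian envelope. This is standard modeling practice rather than anything specified on the slide; the parameters below are assumed:

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, wavelength=6.0, sigma=3.0):
    """Oriented simple-cell-like receptive field: a sinusoidal grating
    of orientation `theta` under a Gaussian envelope."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)  # rotate coordinates
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# Like a simple cell, the filter responds strongly only to a grating
# at its preferred orientation and position.
cell = gabor_kernel(theta=0.0)
matching = gabor_kernel(theta=0.0)           # stimulus at preferred orientation
orthogonal = gabor_kernel(theta=np.pi / 2)   # stimulus rotated 90 degrees
print(np.sum(cell * matching), np.sum(cell * orthogonal))
```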

14 Complex Cells Complex cells recognize patterns with a specific orientation but tolerate a positional shift. They are the most common type of cell in the primary visual cortex, and some exhibit directional selectivity. Fig. Complex Cell Fig. Complex Cell with Directional Selectivity

15 End-Stopping Usually simple and complex cells exhibit length summation: extending their preferred stimulus (e.g. a line) beyond their receptive field produces no change in their response. Some simple and complex cells are end-stopped: extending the preferred stimulus beyond their receptive field diminishes their response. Fig. End-Stopped Complex Cell

16 Orientation and Ocular-Dominance Columns Cells are topologically organized according to ocular dominance and orientation preference. Fig. Recordings from an electrode moving approximately parallel to the cortical surface. Fig. Preferred orientation map in primary visual cortex [Blasdel 86]

17 Hierarchy of layers The visual cortex is organized essentially as a hierarchy of cells: layers of simple and complex cells are arranged hierarchically, and the input of a layer is the output of the previous layer. Fig. Visual pathway [nips.ac.jp]

18 Increasing Complexity Throughout the visual cortex there is a gradual increase in the complexity of the preferred stimulus. Receptive field sizes and invariance properties also increase gradually. Fig. Increasing complexity in preferred stimulus [Kobatake et al. 94] Fig. Receptive fields from a region including V4 and IT [Kobatake et al. 94]

19 Overcomplete dictionaries of features Complete dictionaries (small and independent): e.g. represent a sentence with several words. Overcomplete dictionaries (large and dependent): e.g. represent a sentence with one or very few words. Neurons in the inferotemporal cortex may be tuned to overcomplete dictionaries [Tanaka et al. 96]
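To make the dictionary analogy concrete: a complete basis gives each signal exactly one coefficient vector, typically with several nonzero entries ("several words"), while an overcomplete dictionary has more atoms than dimensions, so a well-matched atom can describe the signal almost alone ("one word"). A small NumPy illustration with an assumed 2-D toy dictionary:

```python
import numpy as np

# Complete, independent dictionary: the 2-D canonical basis.
complete = np.eye(2)

# Overcomplete, dependent dictionary: four unit atoms in 2-D.
angles = np.deg2rad([0, 45, 90, 135])
overcomplete = np.stack([np.cos(angles), np.sin(angles)])  # shape (2, 4)

x = np.array([1.0, 1.0])  # signal to represent

# Complete basis: unique coefficients, generally several nonzero "words".
coeffs_complete = np.linalg.solve(complete, x)

# Overcomplete dictionary: many valid representations; the 45-degree atom
# alone reconstructs x, i.e. "one word" describes the whole pattern.
coeffs_sparse = np.zeros(4)
coeffs_sparse[1] = np.sqrt(2.0)
print(coeffs_complete, overcomplete @ coeffs_sparse)
```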

20 Topological Data Non-topological data: the position of the parts of a pattern does not contain information. E.g. information about one person [1.80 m, 75 kg, 40 years] is the same as [75 kg, 40 years, 1.80 m]; meaning doesn't depend on order. Topological data: the position of the parts of a pattern contains information. E.g. the word "building" is different from "gnidbuil". Problems of fully-connected networks for topological data: the information implicit in the data topology is ignored; no built-in invariance to shifts and distortion; overfitting; high computational costs.

21 Invariance vs. Specificity We can recognize a specific face despite changes in viewpoint, scale, illumination or expression: our vision is invariant to all these image variations, recognizing objects independently of these conditions. Our vision also has high specificity, with face recognition being the ultimate example. How do we achieve these two rather opposite properties? Fig. Samples from a Face Dataset [AT&T Laboratories Cambridge]

22 Hierarchical Neural Networks Biological inspiration: the mammalian visual system. Layers are not fully connected: each neuron has connections to a small and localized part of the neurons of the previous layer. The global view is created as we ascend through the layers towards the final layer. These networks work with fewer units and connections and have some intrinsic invariance to shifts and distortions.

23 Hierarchical Neural Networks Local receptive fields: each neuron does not connect to all neurons of the previous layer, making its view local; it deals only with a small and localized part of the information. Shared weights: a set of units represents the same template in different positions by sharing the same weight vector, which reduces overfitting and computational costs. Subsampling: the information is progressively reduced from the input layer through to the output layer, improving generalization.
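A minimal NumPy sketch of the three mechanisms just listed: one shared kernel applied at every local receptive field, followed by subsampling. Sizes and the averaging choice are illustrative assumptions:

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Local receptive fields + shared weights: the same small kernel
    (one shared weight vector) is applied at every position."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def subsample(fmap, factor=2):
    """Subsampling: average non-overlapping blocks, shrinking the map."""
    H, W = fmap.shape
    H, W = H - H % factor, W - W % factor
    blocks = fmap[:H, :W].reshape(H // factor, factor, W // factor, factor)
    return blocks.mean(axis=(1, 3))

image = np.random.rand(28, 28)
kernel = np.random.randn(5, 5)           # one template, reused everywhere
fmap = correlate2d_valid(image, kernel)  # 24x24 feature map
print(subsample(fmap).shape)             # -> (12, 12)
```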

24 Neocognitron A biologically inspired hierarchical neural network for visual recognition [Fukushima 80]. Invariant to shifts and distortions. Reduces the information progressively throughout the several layers from the input to the output. Unlike fully-connected networks, it allows the interpretation of the network's operation.

25 Neocognitron Network Architecture Cell: represents a biological cell; has a specific number of connections in a certain position. Cell-plane: a group of cells of the same type which all recognize the same feature, like a feature map. Layer: a set of cell-planes of the same type of cells. Stage: an ordered pair of layers in which the first is an S-cell layer (simple) and the second a C-cell layer (complex). Fig. cell / cell-plane / layer / stage, adapted from [Fukushima 03]

26 Neocognitron S-cells and C-cells S-cells represent simple cells in the visual cortex: they extract features, learning to form a template of a particular feature in a particular position, and share a weight vector with all cells in their cell-plane, so in a cell-plane all cells extract the same feature in different positions. C-cells represent complex cells in the visual cortex: they allow positional shifts in features, and their output is a blurred version of their input.

27 Learning in Neocognitron Sequential learning: each stage is trained separately; S-cell connections are changed by learning while C-cell connections are fixed. Training starts from the stages closest to the input layer, and the training of a stage only starts after that of all preceding stages is finished. Learning is unsupervised, except for the final stage, which corresponds to a classifier: assign a label to each cell-plane in the last stage; if the winner cell has the same label as the input stimulus, reinforce its connections; if the winner cell has a different label, create a new cell-plane for the presented input stimulus. Other variations exist.

28 Learning in Neocognitron Competitive learning in S-cells: for each input, only the cell-plane of the winner cell gets its excitatory connections reinforced. Inhibitory connections make each cell-plane specialize in only one feature. A selectivity threshold controls the number of cell-planes: a small threshold makes the cells less selective, keeping the number of cell-planes small, and vice-versa.
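A toy winner-take-all sketch of this competitive reinforcement. It deliberately simplifies: plain dot-product responses and additive updates stand in for Fukushima's full S-cell rule, and the two-feature input stream is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_planes, input_dim, lr = 4, 16, 0.5
templates = rng.random((n_planes, input_dim)) * 0.01  # one weight vector per cell-plane

def train_step(x):
    """Reinforce only the winning cell-plane's excitatory connections."""
    responses = templates @ x           # each plane's response to the input
    winner = int(np.argmax(responses))  # competition across cell-planes
    templates[winner] += lr * x         # Hebbian-style reinforcement of winner
    return winner

for _ in range(20):
    # Two recurring binary features; each plane should specialize in one.
    x = np.zeros(input_dim)
    x[:8] = 1.0 if rng.random() < 0.5 else 0.0
    x[8:] = 0.0 if x[0] else 1.0
    train_step(x)
```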

29 Neocognitron Example Fig. Pattern examples Fig. Model operation (figures from [Fukushima 88])

30 Neocognitron S-cell response (equation reconstructed from [Fukushima 88]):

$$u_{S}(k,\mathbf{n}) = r\,\varphi\!\left[\frac{1+\sum_{\kappa}\sum_{\boldsymbol{\nu}\in A} a(\kappa,\boldsymbol{\nu},k)\,u_{C}(\kappa,\mathbf{n}+\boldsymbol{\nu})}{1+\frac{r}{1+r}\,b(k)\,v(\mathbf{n})}-1\right],\qquad v(\mathbf{n})=\sqrt{\sum_{\kappa}\sum_{\boldsymbol{\nu}\in A} c(\boldsymbol{\nu})\,u_{C}(\kappa,\mathbf{n}+\boldsymbol{\nu})^{2}},\qquad \varphi[x]=\max(x,0)$$

Legend: $r$, selectivity constant; $a$, variable excitatory connection weight; $u_C$, excitatory input; $\boldsymbol{\nu}$, position; $\kappa$, cell-plane; $v$, inhibitory input; $A$, radius of the connectable area; $b$, variable inhibitory connection weight; $v(\mathbf{n})$, root mean-square of the preceding signals, computed through the fixed excitatory connections $c$. The response is a decreasing function of $v$. Fig. S-cell response [Fukushima 88]

31 Neocognitron C-cell response (equation reconstructed from [Fukushima 88]):

$$u_{C}(k,\mathbf{n}) = \psi\!\left[\sum_{\boldsymbol{\nu}\in D} d(\boldsymbol{\nu})\,u_{S}(k,\mathbf{n}+\boldsymbol{\nu})\right],\qquad \psi[x]=\frac{\varphi[x]}{1+\varphi[x]}$$

Legend: $d$, fixed excitatory connection weight; $u_S$, excitatory input; $\boldsymbol{\nu}$, position; $k$, cell-plane; $D$, radius of the connectable area; $\psi$, representing the saturation characteristic of the C-cell.
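A minimal NumPy rendering of the two responses on slides 30-31 (one-dimensional cells over a single preceding cell-plane, with made-up toy weights; a sketch, not Fukushima's implementation):

```python
import numpy as np

phi = lambda x: np.maximum(x, 0.0)       # half-wave rectification
psi = lambda x: phi(x) / (1.0 + phi(x))  # C-cell saturation

def s_cell(u_prev, a, b, c, r):
    """S-cell response at one position: excitatory template match
    divided by an inhibitory RMS term, then rectified and scaled."""
    excit = 1.0 + np.dot(a, u_prev)      # variable excitatory input
    v = np.sqrt(np.dot(c, u_prev**2))    # RMS of the preceding signals
    inhib = 1.0 + (r / (1.0 + r)) * b * v
    return r * phi(excit / inhib - 1.0)

def c_cell(u_s_window, d):
    """C-cell response: saturated sum of S-cell outputs over positions."""
    return psi(np.dot(d, u_s_window))

u_prev = np.array([0.0, 1.0, 1.0, 0.0])  # toy preceding C-layer patch
a = np.array([0.0, 0.5, 0.5, 0.0])       # learnt excitatory weights
c = np.ones(4) / 4                       # fixed weights feeding the V-cell
print(s_cell(u_prev, a, b=1.0, c=c, r=1.7))
print(c_cell(np.array([0.2, 0.8, 0.3]), d=np.array([0.25, 0.5, 0.25])))
```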

32 Parametrization Problem: highly susceptible to parametrization, unlike fully-connected networks, with a large number of parameters: number of layers; number of cell-planes in each layer; size of the receptive fields in each layer; overlap of the receptive fields; weight of the inhibitory connections... Solutions: trial and error; parameter optimization, e.g. hill climbing or a genetic algorithm; use biologically plausible parameters (subject to uncertainty in the experimental data).

33 Convolutional Neural Network (CNN) Inspired by the same biological principles as the Neocognitron, but more loosely: local receptive fields, shared weights and subsampling. Essentially an engineering model: back-propagation learning; the network topology is hierarchical and not fully connected. Fig. Model operation [yann.lecun.com]

34 LISSOM Inspired by the Self-Organizing Map but uses only local rules: Hebbian learning and lateral connections between neurons, a biologically plausible learning mechanism. The model gives rise to self-organized lateral connectivity similar to that observed in the neocortex. Proposed initially to explain only the organization of the primary visual cortex (RF-LISSOM) [Sirosh et al. 94]. Later extended to deal with natural images by modeling the retina and LGN (CRF-LISSOM) [Bednar et al. 99], and further extended to deal with the entire visual cortex by adding a hierarchy of layers (HLISSOM) [Bednar et al. 01].

35 RF-LISSOM relation to SOM Self-Organizing Maps [Kohonen 82] are a simple and efficient model for self-organization, used as an alternative to lateral connections. The winner neuron for a stimulus $\mathbf{x}$ is $c=\arg\min_i \lVert \mathbf{x}-\mathbf{w}_i\rVert$, and weights are updated according to

$$\mathbf{w}_i(t+1)=\mathbf{w}_i(t)+\alpha(t)\,h_{ci}(t)\,[\mathbf{x}(t)-\mathbf{w}_i(t)]$$

where $\mathbf{w}_i$ is the neuron's preferred stimulus, $\alpha$ the learning rate, $h_{ci}$ the neighborhood amplitude around the winner $c$, and $\mathbf{x}$ the stimulus. The map will converge to a globally topologically organized state if training is appropriate. Fig. SOM neighborhood example Fig. Color Map SOM [pymvpa.org]
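A compact training sketch following the SOM update above: a 1-D map of color-preferring units with a Gaussian neighborhood. Map size and the decay schedules are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 20, 3                  # 1-D map of 20 units, RGB inputs
weights = rng.random((n_units, dim))  # preferred stimulus per neuron
positions = np.arange(n_units)

for t in range(2000):
    x = rng.random(dim)                                    # random color stimulus
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    alpha = 0.5 * (1 - t / 2000)                           # decaying learning rate
    radius = 1 + 5 * (1 - t / 2000)                        # shrinking neighborhood
    h = np.exp(-((positions - winner) ** 2) / (2 * radius**2))
    weights += alpha * h[:, None] * (x - weights)          # SOM update rule

# Neighboring units now hold similar colors: a topologically ordered map.
print(np.round(weights[:5], 2))
```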

36 RF-LISSOM The RF-LISSOM network has afferent connections and lateral connections: short-range excitatory and long-range inhibitory. Weight adaptation is activity-dependent, unsupervised and local. Fig. LISSOM Network Architecture [Bednar et al. 04]

37 RF-LISSOM Learning of afferent, excitatory lateral and inhibitory lateral connections is simultaneous. Afferent connections are all excitatory. Each neuron has reciprocal excitatory and inhibitory lateral connections with other neurons. The neurons' initial response is based on the afferent connection weights. The primary effect of the lateral connections is to sharpen the contrast between areas of high and low activity.

38 RF-LISSOM Ignoring lateral connections, a neuron's output is simply the value of the activation function applied to the scalar product of the afferent input and the corresponding weights of the neuron; the initial response does not involve lateral connections. The activation function makes the neuron selective and non-linear: it only responds above the lower threshold and saturates at the upper threshold. Fig. Neuron Activation Function [Sirosh et al. 94]

39 RF-LISSOM Learning A neuron's output is the value of the activation function applied to the weighted sum of the afferent input, the excitatory lateral input and the inhibitory lateral input. Lateral weights develop through a version of the Hebb rule: excitatory and inhibitory weight vectors are updated according to the correlation between pre- and postsynaptic neuron activity, while the total synaptic strength of the lateral excitatory and inhibitory connections is kept constant. The update of afferent connections is analogous.
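A sketch of this constant-total-strength Hebbian update; the divisive-normalization form below is the usual LISSOM choice, and the variable names are mine:

```python
import numpy as np

def hebb_update(w, pre, post, alpha=0.1):
    """Hebbian update with divisive normalization: weights grow with
    pre/postsynaptic correlation, but the total synaptic strength is
    kept constant, as in the LISSOM lateral-weight rule."""
    w_new = w + alpha * pre * post  # Hebb: correlation-driven growth
    return w_new / np.sum(w_new)    # renormalize total strength

rng = np.random.default_rng(1)
w = np.full(8, 1 / 8)               # lateral weights from 8 neighbors
pre = rng.random(8)                 # presynaptic activities
post = 0.9                          # this neuron's activity
w = hebb_update(w, pre, post)
print(w.sum())                      # total strength stays 1.0
```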

40 RF-LISSOM Learning Activation thresholds are modified during learning to increase selectivity: the lower and upper thresholds come closer, depending on the activity of the neurons, until they reach a prescribed limit. Non-significant connections are eliminated according to a threshold.

41 RF-LISSOM The model replicates the orientation preference patterns of the primary visual cortex. The map is not globally topologically organized, only locally organized. Fig. LISSOM development of orientation in V1 [Plebe et al. 07]

42 How to achieve invariant object recognition? Object-centered representation hypothesis: objects are represented as descriptions of spatial arrangements among parts, in 3-D coordinates centered on the object itself. View-based representation hypothesis: objects are represented as collections of view-specific features; some psychophysical and physiological evidence suggests that view-invariant output is explicitly represented by a small number of neurons.

43 Object-centered representation Object-centered model: objects are represented as descriptions of spatial arrangements among parts, in 3-D coordinates centered on the object itself. Recognition by Components (RBC) theory [Biederman et al. 87]: extract a view-invariant structural description of the object using volumetric primitives and their spatial relationships, then compare the extracted description with stored object descriptions.

44 View-based representation View-based model: invariance is achieved by pooling over afferents tuned to various transformations of the same stimulus [Perrett et al. 93]. Computational model [Poggio et al. 90]: train a set of view-tuned units and feed them into a view-invariant unit; new views are interpolated over the learnt views. Some psychophysical and physiological evidence: view-invariant output is explicitly represented by a small number of neurons. Fig. Feedforward view-based model [Riesenhuber 00]

45 HMAX Two pooling mechanisms: linear summation for simple cells and a non-linear maximum for complex cells. The maximum gives a more robust response under clutter and multiple stimuli. Linear summation for complex cells is problematic: it is unable to achieve size invariance, since several units which react to the same stimulus at different scales are added up, increasing the response to larger stimuli. Fig. Model Sketch [Riesenhuber et al. 99]
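A small illustration of why MAX pooling preserves invariance where SUM pooling does not, using invented afferent responses:

```python
import numpy as np

def complex_cell(afferents, mode="max"):
    """Pool a complex cell's afferent simple-cell responses."""
    return np.max(afferents) if mode == "max" else np.sum(afferents)

# Afferents tuned to the same feature at 4 scales/positions.
small_bar = np.array([0.9, 0.1, 0.0, 0.0])  # stimulus drives one afferent
large_bar = np.array([0.9, 0.8, 0.7, 0.6])  # larger stimulus drives several

# SUM grows with stimulus size, breaking size invariance;
# MAX reports the best-matching afferent and stays stable.
print(complex_cell(small_bar, "sum"), complex_cell(large_bar, "sum"))  # 1.0 vs 3.0
print(complex_cell(small_bar, "max"), complex_cell(large_bar, "max"))  # 0.9 vs 0.9
```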

46 HMAX Fig. HMAX Schematic [Serre et al. 07]

47 Feedforward processing accounts for rapid categorization Rapid categorization is likely mostly feed-forward, given the number of processing stages and typical neural latencies. Humans were asked to detect whether an animal was present in an image shown for 20 ms; the results were similar to those of the HMAX model using only feed-forward connections. Fig. Experiment Outline [Serre et al. 2007] Fig. Experiment Results [Serre et al. 2007]

48 Feedback EEG studies show that the human visual system can detect an object within 150 ms, which implies that the role of feedback in such tasks is limited. In clutter situations and with multiple stimuli, attention could play a major role.

49 Future Directions: Parametrization; Attention; Feedback; Learning Role in Connections; Population Codes

50 Questions
