Deep learning in the visual cortex
1 Deep learning in the visual cortex Thomas Serre Brown University. Fundamentals of primate vision. Computational mechanisms of rapid recognition and feedforward processing. Beyond feedforward processing: Attentional mechanisms and cortical feedback
2 Ventral visual stream Feedforward processing
3 Coarse initial base representation - Enables rapid object detection/recognition ("what is there?") - Insufficient for object localization - Sensitive to presence of clutter. Feedforward processing
4-8 The crowding problem (figure: letter stimuli shown as isolated objects, in clutter, and with spatial attention directed to the target). Prediction: recognition in clutter requires attentional mechanisms via cortical feedback. See also Dayan 08; Pelli & Tillman 08; and many others
9-10 1. Monkey electrophysiology - Read-out of inferior temporal cortex population activity: spatial attention eliminates clutter (with Zhang, Meyers, Bichot, Poggio & Desimone). 2. Computational model of integrated attention and recognition - What and where: a Bayesian inference theory of attention (with Chikkerur, Tan & Poggio)
11 The readout approach (figure: activity of neurons 1 through n is fed to a pattern classifier, which outputs a prediction). Zhang, Meyers, Bichot, Serre, Poggio & Desimone, PNAS 11
12 The experiment. Zhang, Meyers, Bichot, Serre, Poggio & Desimone (unpublished data)
13-17 The experiment: train the readout classifier on isolated objects, then test generalization in clutter (conditions: isolated, attended, non-attended; figure: fixation cross and stimulus displays). Zhang*, Meyers*, Bichot, Serre, Poggio & Desimone (in submission)
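To make the readout approach concrete, here is a minimal decoding sketch in Python (not the authors' code): a correlation-based nearest-centroid classifier is trained on synthetic population responses to isolated objects and then tested on responses simulated for clutter and attended-in-clutter conditions. The tuning curves, clutter mixing and attention factor are all invented for illustration.

```python
# Illustrative sketch (not the authors' code): a population "readout" trained on
# responses to isolated objects, then tested for generalization to clutter.
# Firing rates, object tuning and clutter/attention effects are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_objects, n_trials = 100, 5, 30

# Hypothetical tuning: each object evokes a characteristic population vector.
tuning = rng.gamma(shape=2.0, scale=5.0, size=(n_objects, n_neurons))

def simulate(condition, obj):
    """Simulate one trial of population activity for a given object."""
    rate = tuning[obj].copy()
    if condition == "clutter":          # distractors dilute the object signal
        rate = 0.4 * rate + 0.6 * tuning.mean(axis=0)
    elif condition == "attended":       # attention partially restores it
        rate = 0.8 * rate + 0.2 * tuning.mean(axis=0)
    return rng.poisson(rate)            # spike-count variability

def fit_centroids(X, y):
    """Nearest-centroid (correlation) readout: one template per object."""
    return np.stack([X[y == k].mean(axis=0) for k in range(n_objects)])

def decode(centroids, X):
    z = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
    c = (centroids - centroids.mean(1, keepdims=True)) / centroids.std(1, keepdims=True)
    return np.argmax(z @ c.T, axis=1)   # most correlated template wins

# Train on isolated objects only.
y_train = np.repeat(np.arange(n_objects), n_trials)
X_train = np.array([simulate("isolated", k) for k in y_train])
centroids = fit_centroids(X_train, y_train)

# Test generalization: isolated, clutter, and attended-in-clutter conditions.
for cond in ["isolated", "clutter", "attended"]:
    y_test = np.repeat(np.arange(n_objects), n_trials)
    X_test = np.array([simulate(cond, k) for k in y_test])
    acc = np.mean(decode(centroids, X_test) == y_test)
    print(f"{cond:>9}: decoding accuracy = {acc:.2f}")
```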
18-23 Effects of attention on decoding accuracy: decoding category and decoding object identity (figure). Zhang*, Meyers*, Bichot, Serre, Poggio & Desimone (in submission)
24 Effects of attention on decoding accuracy: decoding category and decoding target location (figure). Zhang*, Meyers*, Bichot, Serre, Poggio & Desimone (in submission)
25 Changes in the salience of distractor stimuli dominate over attention-related enhancements (aligned to the time when one of the distractors underwent a change). Zhang*, Meyers*, Bichot, Serre, Poggio & Desimone (in submission)
26-30 Summary. Consistent with feedforward hierarchical models: in the absence of attention, information about the identity (and position) of individual objects in clutter is greatly reduced relative to when objects are shown in isolation. Attention seems to restore the pattern of neural activity toward the vector representing the isolated object. In spite of this nearly exclusive representation of the attended object, an increase in the salience of non-attended objects overrode these attentional enhancements. Results provide a computational-level explanation for how attention operates on neural representations to solve the problem of invariant recognition in clutter
31 1. Monkey electrophysiology - Read-out of inferior temporal cortex population activity: spatial attention eliminates clutter (with Zhang, Meyers, Bichot, Poggio & Desimone). 2. Computational model of integrated attention and recognition - What and where: a Bayesian inference theory of attention (with Chikkerur, Tan & Poggio)
32-35 Perception as Bayesian inference (figure: a visual scene description S gives rise to image measurements I): P(S | I) ∝ P(I | S) P(S). Mumford 92; Knill & Richards 96; Dayan & Zemel 99; Rao 02 04; Kersten & Yuille 03; Kersten et al 04; Lee & Mumford 03; Dean 05; George & Hawkins 05 09; Yu & Dayan 05; Hinton 07; Epshtein et al 08; Murray & Kreutz-Delgado 07
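A tiny numeric illustration of the Bayes rule above, with made-up scene hypotheses, prior and likelihood values:

```python
# Toy illustration of P(S | I) ∝ P(I | S) P(S): infer a discrete scene
# description S from an image measurement I. Scenes, prior and likelihood
# below are invented purely to show the computation.
import numpy as np

scenes = ["car", "pedestrian", "tree"]
prior = np.array([0.5, 0.2, 0.3])          # P(S)
likelihood = np.array([0.10, 0.60, 0.05])  # P(I | S) for the observed measurement

posterior = likelihood * prior             # unnormalized P(S | I)
posterior /= posterior.sum()               # normalize over S

for s, p in zip(scenes, posterior):
    print(f"P(S = {s} | I) = {p:.2f}")
```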
36 Hypothesis #1: To recognize and localize objects in a scene, the visual system selects objects one at a time, inferring the posterior P(O, L | I) over the object O and its location L from the image measurements I (figure: visual scene description, image measurements). Chikkerur, Serre, Tan & Poggio 10
37 Hypothesis #2: Two independent streams of processing for object and location - a dorsal "where" stream with explicit tuning for position (shape implicit, carrying L) and a ventral "what" stream with explicit tuning for shape (position implicit, carrying O), both under PFC; the generative model factorizes as P(O) P(L) P(I | L, O). Chikkerur, Serre, Tan & Poggio 10
38 Hypothesis #3: Objects are encoded by collections of generic features F_i (conditionally independent given an object and its location), read out by retinotopic units X_i (figure: dorsal "where" stream with explicit tuning for position, ventral "what" stream with explicit tuning for shape, PFC). Chikkerur, Serre, Tan & Poggio 10
39 Perception as Bayesian inference (graphical model): object O, location L, position- and scale-tolerant features F_i (IT), retinotopic features X_i (V4), i = 1...N, and image measurements I. Chikkerur, Serre, Tan & Poggio 10
40 Perception as Bayesian inference: the location posterior P(L | I) serves as a saliency map (figure: O, F_i, L, X_i). Chikkerur, Serre, Tan & Poggio 10
41 Perception as Bayesian inference. The goal of visual perception is to estimate the posterior probabilities of visual features, objects and their locations in an image. Attention corresponds to conditioning on high-level latent variables representing particular objects or locations (as well as on the sensory input), and doing inference over the other latent variables (graphical model: object O, location L, position- and scale-tolerant features F_i (IT), retinotopic features X_i (V4), image measurements). Here we used belief propagation to solve the inference problem. Biologically-plausible implementations of belief propagation: Zemel et al. 98; Beck & Pouget 07; Deneve 08; George 08; Litvak & Ullman 09; Rao 04; Steimer et al. 09
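The following sketch makes "attention as conditioning" concrete on a toy version of the O, L, F_i, X_i model. All conditional probabilities are invented, and exact inference by enumeration stands in for belief propagation (they agree on this small graph). Conditioning on the object O sharpens the posterior over the location L, i.e. feature-based attention yields a target-specific saliency map; leaving O unspecified yields the bottom-up map.

```python
# A minimal sketch of "attention as conditioning" on a toy version of the
# O (object), L (location), F (feature), X (retinotopic unit) model.
# All conditional probabilities are invented; exact inference by enumeration
# stands in for belief propagation here.
import itertools
import numpy as np

n_obj, n_loc, n_feat = 2, 3, 2

P_O = np.array([0.5, 0.5])                    # prior over objects
P_L = np.ones(n_loc) / n_loc                  # prior over locations
P_F_given_O = np.array([[0.9, 0.1],           # object 0 mostly drives feature 0
                        [0.2, 0.8]])          # object 1 mostly drives feature 1

def P_X_given_F_L(x, f, l, i):
    """Binary activation of the retinotopic unit at position i: driven by the
    feature f when the object sits at that position (l == i), baseline otherwise."""
    p_active = (0.85 if f == 1 else 0.15) if l == i else 0.2
    return p_active if x == 1 else 1.0 - p_active

def posterior_L(x_obs, o_attend=None):
    """P(L | X = x_obs [, O = o_attend]), summing the joint over the free variables."""
    objects = [o_attend] if o_attend is not None else list(range(n_obj))
    post = np.zeros(n_loc)
    for l in range(n_loc):
        for o, f in itertools.product(objects, range(n_feat)):
            lik = np.prod([P_X_given_F_L(x_obs[i], f, l, i) for i in range(n_loc)])
            post[l] += P_O[o] * P_L[l] * P_F_given_O[o, f] * lik
    return post / post.sum()

x_obs = [0, 1, 0]                             # feature-1-like activity at position 1
print("bottom-up map     P(L | X):             ", np.round(posterior_L(x_obs), 2))
print("feature attention P(L | X, O = object 1):", np.round(posterior_L(x_obs, 1), 2))
```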
42-46 Bayesian inference and attention (figure: dorsal "where" stream carries the location L, ventral "what" stream carries the object O, features F_i and retinotopic units X_i; the feedforward (bottom-up) sweep supplies the feedforward input, top-down spatial attention acts through L, top-down feature-based attention acts through F_i, and the location read-out P(L | I) is also shown). The posterior over a retinotopic unit combines the feedforward input with an attentional modulation and a suppressive drive from the pooled units X_k:

P(X_i | I) = [ P(I | X_i) Σ_{F_i, L} P(X_i | F_i, L) P(L) P(F_i) ] / Σ_{X_i} { P(I | X_i) Σ_{F_i, L} P(X_i | F_i, L) P(L) P(F_i) }

Special case of the normalization model of attention by Reynolds & Heeger 09
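A rough numeric sketch of the structure of this equation (not the authors' code or parameters): feedforward evidence P(I | X_i), priors P(L) and P(F_i), a coupling table P(X_i | F_i, L), and divisive normalization over the values of X_i. All numbers are illustrative placeholders.

```python
# Sketch of the normalization-form posterior above: feedforward input times an
# attentional modulation derived from the priors, divided by the pooled sum
# (suppressive drive). All values are made up for illustration.
import numpy as np

n_loc = 4                                       # X_i and L both range over 4 locations

feedforward = np.array([1.0, 3.0, 1.0, 1.0])    # P(I | X_i = l), up to a constant

# P(X_i = l | F_i = f, L = l'): if the feature is present (f = 1), X_i tends to
# point at the object's location l'; if absent, X_i is uninformative (uniform).
P_X_given_F_L = np.empty((n_loc, 2, n_loc))
P_X_given_F_L[:, 0, :] = 1.0 / n_loc
P_X_given_F_L[:, 1, :] = 0.05
for l in range(n_loc):
    P_X_given_F_L[l, 1, l] = 0.85

def posterior(P_L, p_feat):
    P_F = np.array([1.0 - p_feat, p_feat])      # P(F_i)
    attn = np.einsum("xfl,f,l->x", P_X_given_F_L, P_F, P_L)  # attentional modulation
    num = feedforward * attn                    # times the feedforward input
    return num / num.sum()                      # suppressive drive (normalization)

uniform_L = np.full(n_loc, 1.0 / n_loc)
cued_L = np.array([0.05, 0.05, 0.85, 0.05])     # spatial attention cues location 2

print("no attention:     ", np.round(posterior(uniform_L, p_feat=0.7), 2))
print("spatial attention:", np.round(posterior(cued_L, p_feat=0.7), 2))
```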
47-50 Bayesian inference and attention: visual search via (parallel) feature-based attention. Conditioning on the target object O propagates down the ventral stream from the IT features F_i to the V4 units X_i, providing the attentional modulation of the feedforward input in the equation above. Model behavior is consistent with data from V4 by Bichot et al 05 (figure: experimental data vs. model data, panels a-c)
51 Bayesian inference and attention: spatial cueing via (serial) spatial attention. Conditioning on the location L in the dorsal stream provides the attentional modulation of the feedforward input to the retinotopic units X_i in the equation above
52 Multiplicative scaling of tuning curves by spatial attention (McAdams and Maunsell 99 vs. model): with attention, P(L = x) ≈ 1 at the attended location x; without attention, P(L = x) = 1/|L|
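A small check of the scaling claim under the assumptions above: since the spatial prior enters the drive multiplicatively, raising P(L = x) from 1/|L| toward 1 changes the gain by the same factor at every orientation. The Gaussian tuning curve and gain values below are hypothetical.

```python
# Toy check of constant-ratio (multiplicative) scaling by a spatial prior.
import numpy as np

orientations = np.linspace(-90, 90, 7)
tuning = np.exp(-orientations**2 / (2 * 30.0**2))   # hypothetical Gaussian tuning

n_locations = 4
unattended = tuning * (1.0 / n_locations)           # P(L = x) = 1/|L|
attended = tuning * 0.95                            # P(L = x) ~ 1 at the attended site

print("attended / unattended gain per orientation:", np.round(attended / unattended, 2))
```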
53 Contrast vs. response gain (Martinez-Trujillo and Treue 02; McAdams and Maunsell 99), as predicted by Reynolds & Heeger 09
54-57 Learning to localize cars and pedestrians in street scenes: learning object priors, and learning location priors from global contextual cues (Context node feeding L in the graphical model; see Torralba et al)
58 The experiment. Dataset: street-scene images with cars & pedestrians and 20 without. Experiment: - 8 participants asked to count the number of cars/pedestrians - Blocks/randomized presentations - Each image presented twice. Eye movements recorded using an infra-red eye tracker; eye movements as a proxy for attention
59-73 Predicting eye movements during searches for cars and pedestrians. The model's location posterior (saliency map), combining bottom-up features with object (feature) priors and contextual (spatial) priors, is compared against the first three human fixations, separately for cars and pedestrians (bar plots: uniform priors / bottom-up, feature priors, feature + contextual (spatial) priors, humans). Overall, the model accounts for 92% of inter-subject agreement! *Similar (independent) results by Ehinger, Hidalgo, Torralba & Oliva (in press)
74 Predicting eye movements during free viewing (human eye data from Bruce & Tsotsos). Method / ROC area: Bruce and Tsotsos: %; Itti et al: %; Proposed: 77.9%
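For reference, a sketch of how such an ROC area can be computed from a priority map and fixation locations. The data are random placeholders, and the scoring convention (fixated pixels as positives vs. control pixels as negatives) is a common choice rather than necessarily the exact protocol used here.

```python
# Illustrative ROC-area computation for fixation prediction from a saliency map.
import numpy as np

rng = np.random.default_rng(1)

priority_map = rng.random((60, 80))                   # stand-in for a model saliency map
fix_r, fix_c = rng.integers(0, 60, 50), rng.integers(0, 80, 50)    # fixated pixels
ctl_r, ctl_c = rng.integers(0, 60, 500), rng.integers(0, 80, 500)  # control pixels

pos = priority_map[fix_r, fix_c]
neg = priority_map[ctl_r, ctl_c]

def roc_area(pos, neg):
    """AUC = probability that a random fixated pixel outranks a random control
    pixel (Mann-Whitney U statistic, ties counted as 0.5)."""
    diffs = pos[:, None] - neg[None, :]
    return ((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / diffs.size

print(f"ROC area: {roc_area(pos, neg):.3f}")          # ~0.5 for this random map
```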
75-78 Summary. Attention is part of the inference process that solves the visual recognition problem of what is where. The main goal of the visual system is to infer the identity and the position of objects in visual scenes: - Spatial attention emerges as a strategy to reduce the uncertainty in shape information, while feature-based attention reduces the uncertainty in spatial information - Featural and spatial attention represent two distinct modes of a computational process solving the problem of recognizing and localizing objects, especially in difficult recognition tasks such as in cluttered natural scenes. The model is agnostic about the specific algorithm for the inference process (i.e., no claim is made that the brain computes probabilities explicitly)
79-81 Two modes of vision. Rapid bottom-up / feedforward processing during the first ms of visual processing: - Coarse/initial base representation - Enables rapid object detection/recognition ("what is there?"). Top-down / re-entrant attentional processing: - Enables recognition in clutter - Enables object localization
82 Acknowledgments. Collaborators (MIT): Narcisse Bichot, Sharat Chikkerur, Bob Desimone, Ethan Meyers, Tomaso Poggio, Cheston Tan, Ying Zhang. Robert J. and Nancy D. Carney Fund for Scientific Innovation