Similarity alignment: a new theory of neural computation. Dmitri "Mitya" Chklovskii


1 Similarity alignment: a new theory of neural computation. Dmitri "Mitya" Chklovskii, Flatiron Institute, Simons Foundation; Neuroscience Institute, NYU Medical Center

2 Imaging neural population activity. [Calcium-imaging movie, 50 μm scale bar. Raw data: Yuste lab. CaImAn software: Pnevmatikakis & Giovannucci (Chklovskii group, Flatiron Institute).] Notation: stimulus vector x_t; neural activity vector y_t.

3 What does neural activity represent? What is neural computation?

4 The linear reconstruction view: the stimulus is decomposed as a weighted sum of features, x_t \approx w_1 y_{t,1} + w_2 y_{t,2} + w_3 y_{t,3} + \dots, i.e. x_t \approx W y_t, where the columns of W are stimulus features and y_t are the neural activities (Bialek, Atick, Abbott, Olshausen & Field). Problems with linear reconstruction: lack of invariance among different brains; leads to neural networks with non-biological synaptic learning rules; unclear how to generate higher-order features.
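To make the reconstruction view concrete, here is a minimal NumPy sketch (toy data and names of my own, not from the talk) that fits a decoder W so that x_t \approx W y_t by least squares:

```python
import numpy as np

# Illustrative linear-reconstruction fit: decode stimuli from activity.
rng = np.random.default_rng(0)
T, k, n = 500, 20, 10
Y = rng.normal(size=(k, T))                      # neural activities, one column per time step
W_true = rng.normal(size=(n, k))
X = W_true @ Y + 0.1 * rng.normal(size=(n, T))   # stimuli generated linearly from activity

# solve min_W ||X - W Y||_F^2; lstsq solves min_B ||Y^T B - X^T||_F^2, so W = B^T
W_hat = np.linalg.lstsq(Y.T, X.T, rcond=None)[0].T
print(np.linalg.norm(X - W_hat @ Y) / np.linalg.norm(X))  # relative reconstruction error
```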

5 Similarity of neural activity patterns in IT (human cortex). [Similarity matrix.] Kriegeskorte et al., 2008; Kiani et al., 2007

6 Similarity of neural activity patterns in IT (monkey cortex vs. human cortex). [Similarity matrices.] Kriegeskorte et al., 2008; Kiani et al., 2007

7 "[Neural] representation is representation of similarities" (S. Edelman, 1998). [Diagram: stimuli x_{t,1}, x_{t,2}, x_{t,3} in stimulus space R^n map to y_{t,1}, y_{t,2} in activity space R^k.] Similar stimuli evoke similar activity patterns. What metric should be used for stimulus similarity?

8 Similarity of neural activity patterns in V1. [Similarity matrix.] Kriegeskorte et al., 2008; Kiani lab

9 Object classification as manifold learning (DiCarlo & Cox, 2007): pixel intensity space → V1 neural activity space → IT neural activity space.

10 Biologically plausible manifold learning. Existing methods (LLE, ISOMAP, MVU, kernel PCA, t-SNE, backprop autoencoders) are biologically implausible. Input: x_1, ..., x_T; output: y_1, ..., y_T. Nonnegative similarity matching objective:

\min_{y_t \ge 0} \frac{1}{2T^2} \sum_{t=1}^{T} \sum_{\tau=1}^{T} \left( x_t^\top x_\tau - y_t^\top y_\tau \right)^2

i.e., output similarities y_t^\top y_\tau should match input similarities x_t^\top x_\tau.
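A direct NumPy restatement of this objective (an illustrative sketch; the function name is mine), useful for scoring a candidate set of outputs Y:

```python
import numpy as np

def similarity_matching_cost(X, Y):
    """Mismatch between input and output similarity matrices.

    X: (n, T) stimuli as columns; Y: (k, T) activities as columns.
    """
    gram_x = X.T @ X          # T x T input similarities  x_t' x_tau
    gram_y = Y.T @ Y          # T x T output similarities y_t' y_tau
    T = X.shape[1]
    return np.sum((gram_x - gram_y) ** 2) / (2 * T**2)
```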

11 Optimization problem → deriving a neural network.

\min_{y_t \ge 0} \frac{1}{2T^2} \sum_{t=1}^{T} \sum_{\tau=1}^{T} \left( x_t^\top x_\tau - y_t^\top y_\tau \right)^2

Expanding the square and dropping terms that do not depend on y:

\min_{y_t \ge 0} \frac{1}{T^2} \sum_{t=1}^{T} \sum_{\tau=1}^{T} \left( -2\, y_t^\top y_\tau x_\tau^\top x_t + y_t^\top y_\tau y_\tau^\top y_t \right)

In the online setting, the past enters only through two synaptic weight matrices, giving a per-stimulus objective:

\min_{y_t \ge 0} \; -2\, y_t^\top W^{YX} x_t + y_t^\top W^{YY} y_t

Online algorithm: rectified neural dynamics y_t \leftarrow \max\!\left( y_t + \eta_y \left( W^{YX} x_t - W^{YY} y_t \right),\, 0 \right), followed by the weight updates

W^{YX}_{ij} \leftarrow W^{YX}_{ij} + \eta \left( y_{t,i} x_{t,j} - W^{YX}_{ij} \right)
W^{YY}_{ij} \leftarrow W^{YY}_{ij} + \eta \left( y_{t,i} y_{t,j} - W^{YY}_{ij} \right)

Local learning rules! [Network diagram: inputs x_{t,1}, ..., x_{t,4} project through Hebbian weights W^{YX} to outputs y_{t,1}, y_{t,2}, which interact through anti-Hebbian lateral weights -W^{YY}.] Pehlevan & Chklovskii (2014); Pehlevan, Sengupta & Chklovskii (2017)
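A minimal sketch of this online algorithm in NumPy, assuming illustrative hyperparameters and class/variable names of my own choosing (see Pehlevan & Chklovskii, 2014 for the reference derivation):

```python
import numpy as np

class SimilarityMatchingNet:
    """Online nonnegative similarity matching with local learning rules."""

    def __init__(self, n_in, n_out, eta_w=1e-2, eta_y=0.1, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        self.W_yx = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_out, n_in))  # feedforward, Hebbian
        self.W_yy = np.zeros((n_out, n_out))  # lateral, anti-Hebbian (enters with a minus sign)
        self.eta_w, self.eta_y, self.n_iter = eta_w, eta_y, n_iter

    def respond(self, x):
        """Run the rectified recurrent dynamics toward a fixed point for one stimulus."""
        y = np.zeros(self.W_yx.shape[0])
        for _ in range(self.n_iter):
            # y <- max(y + eta_y * (W_yx x - W_yy y), 0)
            y = np.maximum(y + self.eta_y * (self.W_yx @ x - self.W_yy @ y), 0.0)
        return y

    def learn(self, x):
        """One online step: respond, then apply the local plasticity rules."""
        y = self.respond(x)
        # Hebbian feedforward update: dW_ij ~ y_i x_j - W_ij
        self.W_yx += self.eta_w * (np.outer(y, x) - self.W_yx)
        # anti-Hebbian lateral update: dW_ij ~ y_i y_j - W_ij, no self-coupling
        dW = np.outer(y, y) - self.W_yy
        np.fill_diagonal(dW, 0.0)
        self.W_yy += self.eta_w * dW
        return y
```

Both updates use only quantities available at the synapse (pre- and postsynaptic activity and the current weight), which is what "local learning rules" means here.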

12 Experimental confirmation of the learning rule W^{YX}_{ij} \leftarrow W^{YX}_{ij} + \eta \left( y_{t,i} x_{t,j} - W^{YX}_{ij} \right). Stationary point: synaptic weight ~ activity correlation,

W^{YX} = \frac{1}{T} \sum_{\tau=1}^{T} y_\tau x_\tau^\top
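A quick numerical check of the stationary-point claim (illustrative only): iterating the update over a fixed data stream drives W toward the empirical input-output correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, k = 2000, 5, 3
X, Y = rng.normal(size=(n, T)), rng.normal(size=(k, T))

W, eta = np.zeros((k, n)), 0.01
for _ in range(50):                    # several passes over the data
    for t in range(T):
        W += eta * (np.outer(Y[:, t], X[:, t]) - W)

corr = (Y @ X.T) / T                   # (1/T) sum_tau y_tau x_tau'
print(np.max(np.abs(W - corr)))        # small residual, shrinks as eta -> 0
```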

13 Similarity matching network can cluster. [Diagram: scatter of data points in the (x_1, x_2) plane; inputs x_{t,1}, x_{t,2} feed through Hebbian weights W^{YX} to outputs y_{t,1}, y_{t,2}, y_{t,3} connected by anti-Hebbian lateral weights -W^{YY}.] Pehlevan & Chklovskii (2014)
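A hypothetical clustering demo reusing the SimilarityMatchingNet sketch above (my toy setup, not the authors' experiment): with two well-separated blobs, distinct clusters tend to drive distinct output units.

```python
import numpy as np

rng = np.random.default_rng(2)
blob_a = rng.normal(loc=[+2.0, 0.5], scale=0.3, size=(200, 2))
blob_b = rng.normal(loc=[-2.0, 0.5], scale=0.3, size=(200, 2))
data = rng.permutation(np.vstack([blob_a, blob_b]))   # shuffle rows

net = SimilarityMatchingNet(n_in=2, n_out=3)
for epoch in range(10):
    for x in data:
        net.learn(x)

labels = [int(np.argmax(net.respond(x))) for x in data]  # most active output unit
```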

14 Clustering by similarity alignment:

\min_{y_t \ge 0} \frac{1}{2T^2} \sum_{t=1}^{T} \sum_{\tau=1}^{T} \left( x_t^\top x_\tau - y_t^\top y_\tau \right)^2

[Figure: block-structured similarity matrix and neural activity patterns for animate vs. inanimate stimuli.]

15 Similarity alignment network learns V1 features from natural images. [Figure: natural image patches and the learned feedforward weights W.] Pehlevan & Chklovskii (2014)

16 Manifold learning and segregation by stacking similarity alignment layers: Input → Layer 1 (k=64) → Layer 2 (k=32) → Layer 3 (k=16) → Layer 4 (k=8) → Layer 5 (k=4) → Layer 6 (k=2). [Figure: similarity matrices and 2-dimensional embeddings at each layer.] Tepper, Sengupta & Chklovskii (2017)
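A sketch of greedy layer-wise stacking with the widths from the slide, reusing the SimilarityMatchingNet class above; the input dimensionality, placeholder data, and training schedule are assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(3)
inputs = rng.normal(size=(500, 128))      # placeholder data, rows are samples

widths, layers, reps = [64, 32, 16, 8, 4, 2], [], inputs
for k in widths:
    layer = SimilarityMatchingNet(n_in=reps.shape[1], n_out=k)
    for epoch in range(5):                # train this layer on the previous layer's outputs
        for x in reps:
            layer.learn(x)
    reps = np.array([layer.respond(x) for x in reps])  # representations fed to the next layer
    layers.append(layer)
```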

17 Stacking similarity networks (in progress). [Diagram: alternating Hebbian feedforward and anti-Hebbian lateral connections along the ventral-stream hierarchy of DiCarlo & Cox, 2007.]

18 Family of similarity alignment networks: each computational objective maps onto a biological feature.

Computational objective → Biological feature
- Similarity alignment → Hebbian plasticity
- Nonnegativity constraint → Neural rectification
- Rank and sparsity regularizers → Adaptive neural thresholds
- Constrained output correlation → Adaptive lateral weights
- Constrained spectral norm of the output similarity matrix → Anti-Hebbian interneurons
- Copositive output similarity matrix → Anti-Hebbian inhibitory interneurons
- Constrained L1-norm of activity → Giant interneuron

Hu, Pehlevan & Chklovskii (2014); Pehlevan & Chklovskii (2014); Pehlevan, Hu & Chklovskii (2015); Pehlevan & Chklovskii (2015); Pehlevan & Chklovskii (2016); Pehlevan, Mohan & Chklovskii (2017); Pehlevan, Sengupta & Chklovskii (2017); Tepper, Sengupta & Chklovskii (2017); Pehlevan, Genkin & Chklovskii (2017)

19 Summary. Similar stimuli evoke similar activity patterns. Neural computation is local similarity alignment. [Diagram: similarity alignment links machine learning algorithms with experimental neuroscience data.]

20 Acknowledgements: Cengiz Pehlevan, Mariano Tepper, Alex Genkin, Anirvan Sengupta, Tao Hu, Sreyas Mohan
