CSE/NB 528 Final Lecture: All Good Things Must...

Course Summary
- Where have we been? Course highlights
- Where do we go from here? Challenges and open problems
- Further reading

What is the neural code?
- What is the nature of the code? Representing the spiking output: single cells vs. populations, rates vs. spike times vs. intervals
- What features of the stimulus does the neural system represent?

Encoding and decoding neural information
- Encoding: building functional models of neurons/neural systems and predicting the spiking output given the stimulus
- Decoding: what can we say about the stimulus given what we observe from the neuron or neural population?

Key concepts: Poisson & Gaussian
- Spike trains are variable
- Models are probabilistic
- Deviations are close to independent
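
As a concrete illustration of this variability (not from the slides), here is a minimal Python sketch that generates homogeneous Poisson spike trains and checks that the spike-count Fano factor (variance/mean) is close to 1, as a Poisson model predicts; the rate and duration are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spike_train(rate_hz, duration_s, dt=0.001):
    """Homogeneous Poisson spike train: in each bin of width dt,
    a spike occurs with probability rate*dt (valid for rate*dt << 1)."""
    n_bins = int(duration_s / dt)
    return rng.random(n_bins) < rate_hz * dt

# Variability check: for a Poisson process the spike-count
# Fano factor (variance/mean) should be close to 1.
counts = np.array([poisson_spike_train(20.0, 1.0).sum() for _ in range(1000)])
print("mean count:", counts.mean(), "Fano factor:", counts.var() / counts.mean())
```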

Highlights: Neural Encoding
[Diagram: the stimulus X(t) is projected onto spike-triggering stimulus features f1, f2, f3, giving components x1, x2, x3; a multidimensional decision function on these components determines the spiking output r(t).]

Highlights: Finding the feature space of a neural system
[Diagram: a Gaussian prior stimulus distribution and the spike-conditional distribution; their comparison yields the STA and, via the covariance, further features.]

Highlights: Finding an interesting tuning curve
[Diagram: comparing the prior P(s) with the spike-conditional distribution P(s|spike) as functions of the stimulus s.]

Decoding: Signal detection theory
Given response distributions p(r|-) and p(r|+) with means <r>_- and <r>_+, decoding corresponds to comparing the test statistic r to a threshold z:
- alpha(z) = P[r >= z | -]: false alarm rate ("size")
- beta(z) = P[r >= z | +]: hit rate ("power")
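
A small sketch of these quantities, assuming illustrative Gaussian response distributions p(r|-) and p(r|+) (the means and variance below are made up, and scipy is assumed available): sweeping the threshold z traces out the ROC curve.

```python
import numpy as np
from scipy.stats import norm

# Assumed illustrative response distributions: p(r|-) and p(r|+)
# are Gaussians with the same variance and different means.
mu_minus, mu_plus, sigma = 5.0, 8.0, 2.0

z = np.linspace(0, 14, 200)                 # candidate thresholds
alpha = 1 - norm.cdf(z, mu_minus, sigma)    # false alarm rate: P[r >= z | -]
beta  = 1 - norm.cdf(z, mu_plus,  sigma)    # hit rate:        P[r >= z | +]

# The (alpha, beta) pairs trace out the ROC curve; the area under it
# equals two-alternative forced-choice performance.
auc = np.sum((beta[:-1] + beta[1:]) / 2 * -np.diff(alpha))  # trapezoid rule
print("ROC area:", auc)
```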

Highlights: Neurometric curves

Decoding from a population, e.g. cosine tuning curves
[Figure: RMS error in the estimate; Theunissen & Miller, 1991.]

More general approaches: MAP and ML
- MAP: the s* that maximizes p[s|r]
- ML: the s* that maximizes p[r|s]
- The difference is the role of the prior: the two objectives differ by a factor p[s]/p[r]
- For the cercal data: [figure comparing the resulting estimates]
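
The contrast is easy to see in a toy Gaussian example (a sketch with made-up numbers, not the cercal data): with a Gaussian likelihood and a Gaussian prior centered at zero, the prior pulls the MAP estimate toward zero while ML ignores it.

```python
import numpy as np

# Toy decoding problem: the response r is Gaussian around a tuning
# curve f(s) = s, and the stimulus has a Gaussian prior centered at 0.
s_grid = np.linspace(-5, 5, 1001)
sigma_r, sigma_s = 1.0, 1.0
r_obs = 2.5  # observed response

log_lik   = -(r_obs - s_grid)**2 / (2 * sigma_r**2)   # log p[r|s]
log_prior = -s_grid**2 / (2 * sigma_s**2)             # log p[s]

s_ml  = s_grid[np.argmax(log_lik)]              # maximizes p[r|s]
s_map = s_grid[np.argmax(log_lik + log_prior)]  # maximizes p[s|r]
print("ML:", s_ml, "MAP:", s_map)  # the prior pulls the MAP estimate toward 0
```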

Highlights: Information maximization as a design principle of the nervous system

Encoding and decoding neural information (recap)
- Encoding: building functional models of neurons/neural systems and predicting the spiking output given the stimulus
- Decoding: what can we say about the stimulus given what we observe from the neuron or neural population?

The biophysical basis of neural computation

Excitability is due to the properties of ion channels:
- voltage-dependent
- transmitter-dependent (synaptic)
- Ca2+-dependent

Highlights: The neural equivalent circuit
Ohm's law and Kirchhoff's law give the membrane equation
    C_m dV/dt = -sum_i g_i (V - E_i) + I_e
balancing the capacitive current (left side) against the ionic currents and the externally applied current (right side).

Simplified neural models
A sequence of neural models of increasing complexity that approach the behavior of real neurons:
- Integrate-and-fire neuron: subthreshold, behaves like a passive membrane; spiking is due to an imposed threshold at V_T
- Spike response model: subthreshold, arbitrary kernel; spiking is due to an imposed threshold at V_T; postspike, incorporates afterhyperpolarization
- Simple model: a complete 2D dynamical system; the spiking threshold is intrinsic, but a reset potential has to be included

Simplified models: integrate-and-fire
    tau_m dV/dt = -(V - E_L) + R_m I_e
If V > V_threshold: spike, then reset V = V_reset.
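
A minimal Euler-integration sketch of this model; the membrane parameters and injected current below are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Leaky integrate-and-fire:  tau_m dV/dt = -(V - E_L) + R_m I_e,
# with an imposed threshold and reset (illustrative parameter values).
tau_m, E_L, R_m = 0.010, -0.065, 1e7        # s, V, ohms
V_th, V_reset   = -0.050, -0.065            # V
dt, T, I_e      = 0.0001, 0.5, 2e-9         # s, s, A

V, spikes = E_L, []
for step in range(int(T / dt)):
    dV = (-(V - E_L) + R_m * I_e) / tau_m   # subthreshold: passive membrane
    V += dV * dt
    if V > V_th:                            # threshold crossing -> spike
        spikes.append(step * dt)
        V = V_reset                         # reset after the spike
print(f"{len(spikes)} spikes, rate = {len(spikes)/T:.1f} Hz")
```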

Simplified models: spike response model (Gerstner; Keat et al., 2001)

Simplified models: spike response model (continued)

Simplified models: dynamical models

Highlights: Dendritic computation
- Filtering
- Shunting
- Delay lines
- Information segregation
- Synaptic scaling
- Direction selectivity

Highlights: Compartmental models
Neuronal structure can be modeled using electrically coupled compartments, connected by coupling conductances.
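
A sketch of the idea with just two passive compartments (a "soma" and a "dendrite") joined by a coupling conductance g_c; all parameter values are assumptions chosen for illustration. Current injected into the dendrite depolarizes the soma only partially, showing the attenuation the coupling conductance produces.

```python
import numpy as np

# Two electrically coupled passive compartments: [soma, dendrite].
C, g_L, E_L, g_c = 1e-10, 5e-9, -0.065, 2e-9   # F, S, V, S
dt = 0.0001
V = np.array([E_L, E_L])
I_ext = np.array([0.0, 50e-12])  # current injected into the dendrite (A)

for _ in range(5000):            # 0.5 s of simulated time
    leak = -g_L * (V - E_L)               # leak current in each compartment
    coupling = g_c * (V[::-1] - V)        # current through the coupling conductance
    V = V + dt * (leak + coupling + I_ext) / C
print("steady-state (mV):", 1000 * V)     # soma depolarizes less than the dendrite
```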

Connecting neurons: Synapses
Presynaptic voltage spikes cause neurotransmitter to cross the cleft, triggering postsynaptic receptors and allowing ions to flow in, changing the postsynaptic potential.
- Glutamate: excitatory
- GABA_A: inhibitory

Synaptic voltage changes
The size of the PSP is a measure of synaptic strength. It can vary on the short term due to input history, and on the long term due to synaptic plasticity: one way to build circuits that learn.

Networks

Modeling Networks of Neurons
    tau dv/dt = -v + F(W u + M v)
where v is the output, the -v term is the decay, W u is the (feedforward) input, and M v is the (recurrent) feedback.
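
Euler integration of this equation for a small random network, as a sketch; the network sizes, random weights, and the rectifying choice of F below are assumptions for illustration.

```python
import numpy as np

# Firing-rate network:  tau dv/dt = -v + F(W u + M v)
rng = np.random.default_rng(1)
tau, dt = 0.010, 0.001
N_out, N_in = 10, 5

W = rng.normal(0, 0.5, (N_out, N_in))   # feedforward input weights
M = rng.normal(0, 0.1, (N_out, N_out))  # recurrent feedback weights
np.fill_diagonal(M, 0.0)                # no self-connections

F = lambda x: np.maximum(x, 0.0)        # rectifying activation function
u = rng.random(N_in)                    # fixed input vector
v = np.zeros(N_out)

for _ in range(500):                    # 0.5 s: relax toward steady state
    dv = (-v + F(W @ u + M @ v)) / tau
    v += dv * dt
print("steady-state rates:", np.round(v, 3))
```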

Highlights: Unsupervised Learning
For a linear neuron, v = w^T u. Basic Hebb rule:
    tau_w dw/dt = v u
Average effect over many inputs:
    tau_w dw/dt = <v u> = Q w
where Q = <u u^T> is the input correlation matrix. The Hebb rule performs principal component analysis (PCA): w aligns with the principal eigenvector of Q.
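
A runnable sketch of this result: because the basic Hebb rule grows w without bound, the code below uses Oja's normalized variant (a standard stabilization, not shown on the slide) and checks that the learned w matches the principal eigenvector of Q, up to sign. The input distribution and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated zero-mean 2-D inputs; Q = <u u^T>.
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
U = rng.multivariate_normal([0, 0], cov, size=20000)

# Oja's rule: dw = eta * v * (u - v * w), a normalized Hebb rule.
w, eta = rng.normal(size=2), 0.005
for u in U:
    v = w @ u                    # linear neuron: v = w^T u
    w += eta * v * (u - v * w)   # Hebbian growth + normalization

# Compare with the principal eigenvector of the correlation matrix Q.
Q = np.cov(U.T)
eigvals, eigvecs = np.linalg.eigh(Q)
pc1 = eigvecs[:, np.argmax(eigvals)]
print("learned w:", w / np.linalg.norm(w))
print("PC1      :", pc1)        # equal up to an overall sign
```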

Highlights: The Connection to Statistics
Unsupervised learning = learning the hidden causes v of the input data u.
- Generative model: p[u|v; G] (data likelihood)
- Recognition model: p[v|u; G] (posterior)
Use the EM algorithm for learning the parameters G.
[Examples: causes of clustered data; causes of natural images.]

Highlights: Generative Models
[A tongue-in-cheek graphical model: hidden causes "droning lecture", "lack of sleep", and "mathematical derivations" generate the observed data.]

Highlights: Supervised Learning
Backpropagation for multilayered networks:
    v_i^m = g(sum_j W_ij g(sum_k w_jk u_k^m)) = g(sum_j W_ij x_j^m)
where x_j^m = g(sum_k w_jk u_k^m) are the hidden-unit activities.
Goal: find W and w that minimize the error relative to the desired outputs d_i^m:
    E(W_ij, w_jk) = (1/2) sum_{m,i} (d_i^m - v_i^m)^2
Gradient descent learning rules:
    W_ij -> W_ij - epsilon dE/dW_ij   (delta rule)
    w_jk -> w_jk - epsilon dE/dw_jk,  with dE/dw_jk = sum_m (dE/dx_j^m)(dx_j^m/dw_jk)   (chain rule)
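
A compact numpy implementation of these two update rules, as a sketch; the XOR task, network size, learning rate, and seed are illustrative assumptions (with some initializations training can stall in a local minimum).

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-layer sigmoidal network v = g(W g(w u)), trained by gradient
# descent on E = 0.5 * sum (d - v)^2.
g = lambda x: 1.0 / (1.0 + np.exp(-x))

U = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # inputs u^m
D = np.array([[0], [1], [1], [0]], float)              # desired outputs d^m

w = rng.normal(0, 1, (2, 4))   # input -> hidden weights w_jk
W = rng.normal(0, 1, (4, 1))   # hidden -> output weights W_ij
eps = 0.5

for _ in range(20000):
    X = g(U @ w)               # hidden activities x_j^m
    V = g(X @ W)               # outputs v_i^m
    err = D - V
    dV = err * V * (1 - V)     # delta rule term for the output layer
    W += eps * X.T @ dV
    dX = (dV @ W.T) * X * (1 - X)  # chain rule: error propagated to hidden layer
    w += eps * U.T @ dX

print(np.round(g(g(U @ w) @ W).ravel(), 2))  # approaches [0, 1, 1, 0]
```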

Highlights: Reinforcement Learning
Learning to predict rewards:
    w -> w + epsilon (r - v) u
Learning to predict delayed rewards (TD learning):
    w(tau) -> w(tau) + epsilon [r(t) + v(t+1) - v(t)] u(t - tau)
(http://employees.csbsju.edu/tcreed/pb/pdoganim.html)
Actor-critic learning:
- The critic learns the value of each state using TD learning
- The actor learns the best actions based on the value of the next state (using the TD error)
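
A tabular TD(0) sketch of the delayed-reward rule (a simplification of the weight-based form above, with a made-up task): states are visited in sequence, reward arrives only at the end, and over trials the TD error pushes the reward prediction back to the earliest state. This same TD error is what the critic supplies in actor-critic learning.

```python
import numpy as np

# Tabular TD(0): states 0..4 are visited in order each trial,
# and a reward of 1 arrives only in the final state.
n_states, eps, n_trials = 5, 0.1, 200
v = np.zeros(n_states)                 # predicted future reward per state

for _ in range(n_trials):
    for t in range(n_states):
        r = 1.0 if t == n_states - 1 else 0.0
        v_next = v[t + 1] if t + 1 < n_states else 0.0
        delta = r + v_next - v[t]      # TD error: r(t) + v(t+1) - v(t)
        v[t] += eps * delta            # update the prediction
print("values:", np.round(v, 2))       # prediction propagates back toward t = 0
```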

The Future: Challenges and Open Problems
- How do neurons encode information? Topics: synchrony, spike-timing based learning, dynamic synapses
- Does a neuron's structure confer computational advantages? Topics: role of channel dynamics, dendrites, plasticity in channels and their density
- How do networks implement computational principles such as efficient coding and Bayesian inference?
- How do networks learn optimal representations of their environment and engage in purposeful behavior? Topics: unsupervised/reinforcement/imitation learning

Further Reading (for the summer and beyond)
- Spikes: Exploring the Neural Code, F. Rieke et al., MIT Press, 1997
- The Biophysics of Computation, C. Koch, Oxford University Press, 1999
- Large-Scale Neuronal Theories of the Brain, C. Koch and J. L. Davis, MIT Press, 1994
- Probabilistic Models of the Brain, R. Rao et al., MIT Press, 2002
- Bayesian Brain, K. Doya et al., MIT Press, 2007
- Reinforcement Learning: An Introduction, R. Sutton and A. Barto, MIT Press, 1998

Next meeting: Project presentations!
- Project presentations will be on Thursday, June 9 (10:30am-12:20pm) in the same classroom
- Keep your presentation short: ~6-8 slides, 8 mins/group
- Slides: bring your slides on a USB stick to use the class laptop (Apple), OR bring your own laptop if you have videos etc.
- Project reports (10-15 pages total) are due June 9 (by email to both Adrienne and Raj before midnight)

Have a great summer! Au revoir!