Brains and Computation


15-883: Computational Models of Neural Systems
Lecture 1.1: Brains and Computation
David S. Touretzky, Computer Science Department, Carnegie Mellon University

Models of the Nervous System

- Hydraulic network (Descartes): nerves are hoses that carry fluid to drive the muscles
- Clockwork: systematic and representational
- Telephone switchboard: communication
- Digital computer ("electronic brain"): computational

Metaphors can serve as informal theories. They help to frame the discussion, but they are limited in predictive power.

Why Do Modeling?

- Models help to organize and concisely express our thoughts about the system being modeled.
- Good models make testable predictions, which can help guide experiments.
- Sometimes a computational model must be implemented in a computer simulation in order to explore and fully understand its behavior. Surprising behavior may lead to new theories.

Computers Made From Meat

The essential claim is this: brains perform computation. Brains are also organs (i.e., metabolic systems) and mechanical structures (aqueducts, fiber tracts, etc.), but they also perform computation. Therefore, computational theories can give insight into brain function.

Can A Physical System Perform Computation?

It's a subjective judgment. What to look for:
1) Its physical states correspond to the representations of some abstract computational system.
2) Transitions between its states can be explained in terms of operations on those representations.

Terry Sejnowski and Patricia Churchland, authors of The Computational Brain

Physical Computation: The Slide Rule

Abstract function being computed: multiplication. Input: a pair of numbers. Output: a number.

Physical realization:
- First input = point on the surface of the (fixed) D scale
- Second input = point on the surface of the (sliding) C scale
- Output = point on the surface of the (fixed) D scale

Slide Rule Computation: Multiply 2.05 by 3

- Move the sliding C scale so that the digit 1 is at 2.05 on the D scale.
- Slide the cursor so that the red index is over the 3 on the C scale.
- Read the result, 6.15, on the D scale.

Why does this work? Multiplication = adding logs.
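The log-addition trick is easy to check in Python. This is just a sketch of the idea, not anything from the lecture; the function name is made up:

```python
import math

def slide_rule_multiply(x, y):
    """Multiply by adding logarithms, as a slide rule does physically.

    Positions on the C and D scales are proportional to log10 of the
    printed numbers, so butting the scales together adds the logs.
    """
    position = math.log10(x) + math.log10(y)  # add the two scale positions
    return 10 ** position                     # read off the product

print(round(slide_rule_multiply(2.05, 3), 2))  # -> 6.15
```

A physical slide rule performs exactly this addition with distances along the scales instead of floating-point arithmetic.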

Tinkertoy Tic-Tac-Toe Computer

Designed by Danny Hillis at MIT. See the Scientific American article for details.

Do Brains Compute?

Most scholars believe the answer is yes: brains are meat computers! Some consider this conclusion demeaning ("Computers are machines. I am not a machine!"). Some try to find reasons the answer could be no. Example: if unpredictable quantum effects played a crucial role in what brains do, then the result would not be describable as a computable function.

How Big Are Meat Computers? Some Numbers

                    Neurons   Synapses
Humans              10^12     10^15
Rats                10^10     10^13
1 mm^3 of cortex    10^5      10^9

A cortical neuron averages about 4 × 10^3 synapses (cat or monkey).

Demystifying the Brain (Cherniak, 1990)

There are roughly 10^13 synapses in cortex. Assume each stores one bit of information. That's 1.25 terabytes. The Library of Congress (80 million volumes, averaging 300 typed pages each) contains about 48 terabytes of data. The brain is complex, but not infinitely so.

The cerebellum, concerned with posture and movement (and...?), contains four times as many neurons as the cortex, the seat of language and conscious reasoning.

Computational Resources

Illustration from Wired Magazine, May 2013.

Computational Processes Posited in the Brain

- Table lookup / associative memory
- Competitive learning; self-organizing maps
- Principal components analysis
- Gradient descent error minimization learning
- Temporal difference learning
- Dynamical systems (attractor networks, parallel constraint satisfaction)

This course will explore these models and how they apply to various brain structures: hippocampus, basal ganglia, cerebellum, cortex, etc.
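To make one item on the list concrete, here is a minimal sketch of gradient descent error minimization: fit the weight w in y = w*x by repeatedly stepping down the squared-error gradient. The data, learning rate, and iteration count are illustrative choices, not from the lecture:

```python
# Fit w in y = w*x by stochastic gradient descent on squared error.
data = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]  # generated with w = 2
w, lr = 0.0, 0.05
for _ in range(200):
    for x, y in data:
        err = w * x - y
        w -= lr * 2.0 * err * x   # d/dw (w*x - y)^2 = 2 * err * x
print(round(w, 3))                # converges to 2.0
```

The same descend-the-error-gradient idea, scaled up to many weights, underlies backpropagation in artificial neural networks.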

Want to Build a Brain? Some Bad News

We're still in the early days of neural computation. Our theories of brain function are vague and wrong.

Building A Brain

IBM's Dharmendra Modha; EPFL's Henry Markram

Science vs. Engineering

Science: figure out how nature works.
- Good models are as simple as possible.
- Models should reflect reality.
- Models should be falsifiable (make predictions).

Engineering: figure out how to make useful stuff.
- "Good" means performs a task faster, cheaper, or more reliably.
- Making a system more like the brain doesn't in itself make it better.

Holy grail for CS/AI people: use insights from neuroscience to solve engineering problems in perception, control, inference, etc. Hard, because we don't know how brains work yet.

Do We Have All the Math We Need to Understand the Brain?

Probably not yet. People have tried all kinds of things:
- Chaos theory
- Dynamical systems theory
- Particle filters
- Artificial neural networks (many flavors)
- Quantum mechanics

We can explain simple neural reflexes, but not memory or cognition. Current theories will probably turn out to be as wrong as Aristotelian physics.

Which Rock Hits the Ground First?

"Natural motion is downward." Aristotle (384-322 BCE)

Aristotelian Motion

Galileo: Motion is Parabolic and Independent of Mass

Galileo Galilei (1564-1642)

Why a Parabola? Need Calculus

a(t) = 9.8 m/s^2
v(t) = ∫ a(t) dt = 9.8 t + v0
h(t) = ∫ v(t) dt = 9.8 t^2 / 2 + v0 t + h0

Isaac Newton (1643-1727)
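Numerically integrating the acceleration (forward Euler, a choice made for this sketch) reproduces the parabola: for a rock dropped from rest (v0 = 0), the distance fallen after one second matches 9.8 t^2 / 2 = 4.9 m:

```python
# Integrate a(t) = 9.8 m/s^2 in small steps and compare with the
# closed-form parabola for a rock dropped from rest.
g, dt = 9.8, 1e-4
v, d = 0.0, 0.0
for _ in range(10000):        # 10000 steps of 1e-4 s = 1 s of fall
    v += g * dt               # v(t) = 9.8 t
    d += v * dt               # d(t) = 9.8 t^2 / 2
print(round(d, 2))            # -> 4.9, matching 9.8 * 1.0**2 / 2
```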

Relativistic Motion: Curved Spacetime

For this theory you need tensor calculus. Albert Einstein (1879-1955)

The Misunderstood Brain

- We know a lot about what makes neurons fire.
- We know a good deal about wiring patterns.
- We know only a little about how information is represented in neural tissue. (Where are the "noun phrase" cells in the brain?)
- We know almost nothing about how information is processed.

This course explores what we do know. There is progress every month. It's an exciting time to be a computational neuroscientist.

Some Representative Successes

- Dopamine cells fire in response to rewards, but also in response to neutral stimuli that have become associated with rewards. They can also stop firing with further training, or pause when a reward is missed. Why should they do that? Temporal difference learning, a type of reinforcement learning, neatly explains much of the data.
- Most cells in primary visual cortex get input from both eyes but have a dominant eye that they respond to more strongly. Staining shows zebra-like ocular dominance stripes. How does this structure emerge? Competitive learning algorithms, a type of unsupervised learning, can account for the formation of ocular dominance and orientation selectivity in V1.
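The dopamine story can be illustrated with a toy temporal-difference model. This is a sketch under simplifying assumptions (a fixed-length trial, cue at the first step, reward at the last, gamma = 1, made-up learning rate), not the model from any particular paper: with training, the prediction error at the reward time shrinks to zero (the cells "stop firing"), the value signal propagates back to the cue, and omitting the reward produces a negative error (the pause):

```python
T, alpha = 5, 0.1
V = [0.0] * (T + 1)        # V[t]: predicted future reward at step t

def td_trial(V, rewarded=True):
    """One trial of TD(0) learning (gamma = 1); returns per-step errors."""
    deltas = []
    for t in range(T):
        r = 1.0 if (rewarded and t == T - 1) else 0.0
        delta = r + V[t + 1] - V[t]   # TD prediction error
        V[t] += alpha * delta
        deltas.append(delta)
    return deltas

for _ in range(1000):
    td_trial(V)                       # train with the reward always delivered

# The cue (t = 0) now carries the full reward prediction, so its
# unexpected onset yields a positive signal ~V[0]; at the usual reward
# time the error is ~0, and omitting the reward drives it negative.
cue_response = V[0]
omission_dip = td_trial(V, rewarded=False)[-1]
print(round(cue_response, 2), round(omission_dip, 2))  # -> 1.0 -1.0
```

These three signatures, transfer to the cue, silence at a fully predicted reward, and a dip at omission, are exactly the dopamine findings described above.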