How to make computers work like the brain
1 How to make computers work like the brain (without really solving the brain) Dileep George
2 a single special machine can be made to do the work of all. It could in fact be made to work as a model of any other machine. Alan Turing 1937
3 Mathematics Databases Communications Video Imaging Audio Gaming - etc. - a single special machine can be made to do the work of all. It could in fact be made to work as a model of any other machine. Alan Turing 1937
4 a single special machine can be made to do the work of all. It could in fact be made to work as a model of any other machine. Alan Turing 1937 Mathematics Databases Communications Video Imaging Audio Gaming - etc. - Visual perception Auditory perception Somatosensory perception Languages Adaptive behavior Planning, thinking,
5 Intelligence paradigms
6 Intelligence paradigms Early AI (1940s to 1970s)
7 Intelligence paradigms Early AI (1940s to 1970s) Program Intelligence
8 Intelligence paradigms Early AI (1940s to 1970s) Program Intelligence Ignore what the brain does
9 Intelligence paradigms Early AI (1940s to 1970s) Program Intelligence Ignore what the brain does Airplanes were not made by...
10 Intelligence paradigms Early AI (1940s to 1970s) Program Intelligence Ignore what the brain does Airplanes were not made by... No learning
11 Intelligence paradigms Early AI (1940s to 1970s) Program Intelligence Ignore what the brain does Airplanes were not made by... No learning Commonsense problem
12 Intelligence Paradigms
13 Intelligence Paradigms Neural Networks (1980s and 1990s)
14 Intelligence Paradigms Neural Networks (1980s and 1990s) Back-propagation algorithm for multi-layer networks
15 Intelligence Paradigms Neural Networks (1980s and 1990s) Back-propagation algorithm for multi-layer networks Universal learning machines... can fit any function
16 Intelligence Paradigms Neural Networks (1980s and 1990s) Back-propagation algorithm for multi-layer networks Universal learning machines... can fit any function The problem of scale
17 Story of chess..
18 Sample complexity of learning Face Non-face 256^1024
19 Sample complexity of learning Face Non-face 256^1024 P(Face) ^1024
20 256^1024=
21 256^1024=
22 256^1024 = ... Number of seconds in a human's life: ...
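To put the two quantities side by side, here is the back-of-the-envelope arithmetic behind these slides (the specific figures are my own, assuming a 32x32 image with 256 gray levels and a roughly 100-year lifetime, not numbers taken from the talk):

```latex
% Back-of-the-envelope sample-complexity numbers (my own rough figures)
\begin{align*}
256^{1024} &= 2^{8192} \approx 10^{2466} && \text{possible } 32\times 32,\ 8\text{-bit images} \\
100~\text{yr} &\approx 3.2\times 10^{9}~\text{s} && \text{of experience in a human lifetime}
\end{align*}
```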
23 Origins of Language.. Ramachandran and Hubbard, 2001
24 Bouba Kiki
25
26 Laminar structure
27 Common cortical algorithm: Supporting evidence. "Seeing in the sound zone," Michael Merzenich, Nature, Vol. 404, April 20, 2000. "Induction of visual orientation modules in auditory cortex," Jitendra Sharma, Alessandra Angelucci and Mriganka Sur, Nature, Vol. 404, April 20, 2000.
28 sensory substitution
29 sensory substitution "A sensorimotor account of vision and visual consciousness," Kevin O'Regan and Alva Noë. Photograph courtesy: P. Bach-y-Rita
30 Neocortex Assume that: Neocortex uses the same algorithm to learn different modalities Neocortex learns efficiently
31 The No Free Lunch Theorem
32 The No Free Lunch Theorem No learning algorithm has an inherent superiority over other learning algorithms for all problems. (Wolpert, 1995)
33 The No Free Lunch Theorem No learning algorithm has an inherent superiority over other learning algorithms for all problems. (Wolpert, 1995) An algorithm's superiority comes from the assumptions that it makes about the problem
34 Neocortex Assume that: Neocortex uses the same algorithm to learn different modalities Neocortex learns efficiently
35 Neocortex Assume that: Neocortex uses the same algorithm to learn different modalities Neocortex learns efficiently => Data from different modalities must have the same underlying statistical structure
36 The right question to ask... Are there assumptions we can make about the world that are 1. General enough to apply to a large number of problems 2. Specific enough to make learning possible?
37 High-level properties of the neocortex Hierarchical organization Spatial and temporal Using time as a supervisor Ability to make predictions Sparse Distributed Representations Feed-forward and feedback connections Sensori-motor
38 Abstraction levels Consciousness Cognitive models Perception Connections High-level anatomical structure Micro-circuit structure Neuron models Dendrites Learning rules Synapses Ion channels Proteins Glia
39 Neocortex: source of assumptions and constraints. World's Data (Visual/Auditory etc.): to find correspondence with Neocortex properties. Computational Principles (Machine Learning/Graphical Models etc.): to understand why the brain does what it does in that particular way.
40 If you are not computationally grounded, looking into biology is like looking into a religious text. You will always find what you are looking for.
41 Temporal learning example Suppose A-C-D and B-C-E are frequent in the data A C D B C E
42 Temporal learning example Suppose A-C-D and B-C-E are frequent in the data A C D B C E First-order model A C D B E
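A minimal sketch (my own illustration, not code from the talk) of why the first-order model loses this information: after training on A-C-D and B-C-E, the merged state C makes the never-seen sequence A-C-E look just as probable as A-C-D.

```python
from collections import defaultdict

# Train a first-order Markov model on the two frequent sequences.
sequences = [list("ACD"), list("BCE")] * 100
counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1

def prob(seq):
    """Probability of a sequence under the learned first-order transitions."""
    p = 1.0
    for prev, nxt in zip(seq, seq[1:]):
        total = sum(counts[prev].values())
        p *= counts[prev][nxt] / total if total else 0.0
    return p

print(prob(list("ACD")), prob(list("ACE")))  # both 0.5: the model cannot tell them apart
```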
43 Temporal Models [diagram: transitions from t-1 (and t-2) to t] N^2 transitions, but can initialize from data to create a sparse model. Not a scalable way to build higher-order models.
44 Variable order sequences [diagram: columns of cells for states A, B, C, D, with connections from time t-1 to time t] (kN)^2 potential connections
45 State-splitting model First order model A C D B E
46 State-splitting model First order model A C D B E Model with replicated A C D states B C E
47 State-splitting model First order model A C D B E Model with replicated A C D states B C E Model after further A C D learning B C E
48 State-splitting model Learn first order model Replicate potential higher-order states Learn temporal statistics on the new model
49 State-splitting model Learn first order model Replicate potential higher-order states Learn temporal statistics on the new model
50 Start with replicated states Should be online Should still be sparse and efficient
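Below is a minimal sketch of the state-splitting recipe from the previous slides. It is my own toy illustration, not the talk's algorithm: instead of learning the assignment of occurrences to copies (which the talk does with forward-backward), each copy of a symbol is simply indexed by its predecessor, which is enough to show how replicated states recover higher-order structure.

```python
from collections import defaultdict

sequences = [list("ACD"), list("BCE")] * 100

# Replicate states: each occurrence of a symbol is mapped to a copy indexed by its
# predecessor -- a crude, deterministic stand-in for the learned splitting in the talk.
def split(seq):
    return [f"{sym}|{prev}" for prev, sym in zip(["#"] + seq[:-1], seq)]

# Learn first-order transition statistics on the replicated states.
trans = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    s = split(seq)
    for a, b in zip(s, s[1:]):
        trans[a][b] += 1

def prob(seq):
    """Sequence probability under the split (higher-order) model."""
    s, p = split(seq), 1.0
    for a, b in zip(s, s[1:]):
        total = sum(trans[a].values())
        p *= trans[a][b] / total if total else 0.0
    return p

print(prob(list("ACD")), prob(list("ACE")))  # 1.0 and 0.0: the split model tells them apart
```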
51 [Diagram: activity at times t and t+1] Which cell in this column do we connect to?
52
53 Connection to HMM [diagram: hidden states c1-c5]
54 HMM and Forward-Backward. The forward-backward algorithm, on the restricted HMM, works very well in finding the higher-order states. Can be made online by reducing the lookahead. One-step look-ahead works quite well. Zero look-ahead (pure Hebbian learning) doesn't work well.
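For reference, a minimal sketch (my own generic HMM code, not the talk's restricted model) of what "one-step look-ahead" means here: the posterior over the hidden state at time t uses the forward pass plus a single step of evidence from t+1, i.e. fixed-lag smoothing with lag 1.

```python
import numpy as np

def filter_with_one_step_lookahead(A, B, pi, obs):
    """P(state_t | obs_{1..t+1}) for a discrete HMM: online forward pass plus
    one step of look-ahead evidence (fixed-lag smoothing with lag 1)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    posteriors = []
    for t in range(len(obs) - 1):
        # one-step look-ahead: weight each current state by how well it predicts obs[t+1]
        lookahead = A @ B[:, obs[t + 1]]
        post = alpha * lookahead
        posteriors.append(post / post.sum())
        # advance the forward (filtering) recursion to t+1
        alpha = (alpha @ A) * B[:, obs[t + 1]]
        alpha /= alpha.sum()
    posteriors.append(alpha)  # last step: no future evidence available
    return np.array(posteriors)

# Toy usage: 2 hidden states, 2 observation symbols (all numbers are made up).
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
pi = np.array([0.5, 0.5])
print(filter_with_one_step_lookahead(A, B, pi, obs=[0, 0, 1, 1]))
```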
55 [Diagram: activity at times t, t+1, t+2] Which cell in this column do we connect to?
56 Biological implication A B C Can A know whether B was successful in firing C? The answer is YES
57
58
59 The idea of replicated states for learning higher-order sequences; the HMM model and why one-step look-ahead is required; a predicted computational property of neurons; the nature of higher-order sequences. (Mapped onto the earlier diagram: Neocortex as the source of assumptions and constraints; World's Data (Visual/Auditory etc.) to find correspondence with Neocortex properties; Computational Principles (Machine Learning/Graphical Models etc.) to understand why the brain does what it does in that particular way.)
60 Recipe for building brain-like computers: Accept the broad biological constraints (hierarchy, time as supervisor, sparse distributed representations, sensori-motor interactions...). Have a biological circuit model for the learned system; put in biological details you can computationally justify. Get to the biological circuit model using the most expedient learning algorithm; it need not be biological, and in fact being biological can be a disadvantage. Work on a real problem.
61 On to the paper...
62
63 Can we come to a set of mathematical equations that model cortical function?
64 Can we come to a set of mathematical equations that model cortical function? Something like Maxwell's equations for electromagnetic waves
65 Can we come to a set of mathematical equations that model cortical function? Something like Maxwell's equations for electromagnetic waves Assigns functions/equations for laminar circuits
66 Can we come to a set of mathematical equations that model cortical function? Something like Maxwell's equations for electromagnetic waves Assigns functions/equations for laminar circuits
67 Hierarchical Temporal Memory Mathematical Expression of Theory +
68 Hierarchical Temporal Memory Mathematical Expression of Theory Biological Data Constraints on connectivity, placement +
69 Hierarchical Temporal Memory Mathematical Expression of Theory Biological Data Constraints on connectivity, placement + Laminar Cortical Circuit Model
70
71
72
73 Markov chains (sequences) Coincidence patterns
74 Large Spatial Scales / Slow temporal scales. Small Spatial Scales / Fast temporal scales.
75 Large Spatial Scales / Slow temporal scales. Small Spatial Scales / Fast temporal scales.
76
77 1. Mathematical expression of HTM inference
78 1. Mathematical expression of HTM inference 2. Abstract neural implementation of equations
79 1. Mathematical expression of HTM inference 2. Abstract neural implementation of equations 3. Map the abstract neural implementation to laminar circuits using biological data
80 1. Mathematical expression of HTM inference 2. Abstract neural implementation of equations 3. Map the abstract neural implementation to laminar circuits using biological data
81
82
83 Markov chains (sequences) Coincidence patterns
84 Inference steps: (1) calculate likelihood of coincidence patterns, (2) calculate likelihood of Markov chains, (3) calculate belief of coincidence patterns, (4) calculate probability of Markov chains
85 (1) Coincidence likelihood (2) Markov chain likelihood (3) Coincidence Belief (4) Feedback messages
86 Coincidence likelihood: $y_t(i) = P(e_t \mid c_i(t)) = \prod_{j=1}^{M} \lambda_t^{m_j}\!\left(r_i^{m_j}\right)$
87 Markov chain likelihood: $\alpha_t(c_i, g_r) = P(e_t \mid c_i(t)) \sum_{c_j(t-1) \in C_k} P(c_i(t) \mid c_j(t-1), g_r)\, \alpha_{t-1}(c_j, g_r)$
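A minimal numpy sketch (my own reading of the two equations above, with invented dimensions and variable names) of how a node would compute the coincidence likelihoods y and run the per-Markov-chain forward recursion alpha:

```python
import numpy as np

def coincidence_likelihood(child_lambdas, coincidence_members):
    """y_t(i): product over the M child messages, each evaluated at the pattern
    r_i^{m_j} that coincidence c_i expects from child m_j (first equation above)."""
    M = len(child_lambdas)
    return np.array([
        np.prod([child_lambdas[j][members[j]] for j in range(M)])
        for members in coincidence_members   # members[j] = index of child j's pattern in c_i
    ])

def markov_chain_forward(alpha_prev, T, y, chain_members):
    """alpha_t(c_i, g_r): one step of the forward recursion within Markov chain g_r,
    restricted to the coincidences C_k belonging to that chain (second equation above)."""
    return y[chain_members] * (T.T @ alpha_prev)   # T[j, i] = P(c_i(t) | c_j(t-1), g_r)

# Toy usage (all numbers invented): 2 children with 3 patterns each, 4 coincidences,
# one Markov chain over coincidences 0, 1 and 3.
child_lambdas = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.1, 0.8])]
coincidence_members = [(0, 2), (1, 2), (2, 0), (0, 1)]
y = coincidence_likelihood(child_lambdas, coincidence_members)

chain_members = np.array([0, 1, 3])
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])   # transitions among the chain's own coincidences
alpha_prev = np.ones(3) / 3
print(markov_chain_forward(alpha_prev, T, y, chain_members))
```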
88 Inference steps: (1) calculate likelihood of coincidence patterns, (2) calculate likelihood of Markov chains, (3) calculate belief of coincidence patterns, (4) calculate probability of Markov chains
89 [Diagram: abstract neural implementation of the four computations above, ending in a Belief output]
90 1. Mathematical expression of HTM inference 2. Abstract neural implementation of equations 3. Map the abstract neural implementation to laminar circuits using biological data
91
92 Alex M. Thomson and A. Peter Bannister. Interlaminar connections in the neocortex. Cerebral Cortex, 13(1):5-14, 2003.
93 [Diagram: neural implementation of the inference steps, ending in the Belief computation]
94 [Diagram: neural implementation of the inference steps, ending in the Belief computation]
95
96 Why Layer 4? Thalamo-cortical inputs go to layer 4
97
98
99
100
101
102 Cortical layers: I, II/III, IV, V, VI
103
104
105 Lee and Mumford, J. Opt. Soc. America, July 2003
106
107
108
109
110
111
112
113 Thank You. Papers, thesis, etc.