Computational Models of Human Cognition

Models
A model is a means of representing the structure or workings of a system or object.
e.g., model car
e.g., economic model
e.g., psychophysics model: loudness = 830.876 x intensity^0.233
[Plot: loudness (sones), 0-2500, as a function of intensity, 0-100.]

Computational Models
Models expressed as computer programs (sequences of instructions) or as complex mathematical equations that require simulation.
e.g., air flow
e.g., Second Life (3D virtual reality world)

Computational Models of Human Cognition
Computer simulation of the neural and/or cognitive processes that underlie performance on a task.

Goals of Computational Modeling in Cog Sci
Understand mechanisms of information processing in the brain
Explain behavioral, neuropsychological, and neuroscientific data
Suggest techniques for remediation of cognitive deficits due to brain injury and developmental disorders
Suggest techniques for facilitating learning in normal cognition
Construct computer architectures to mimic human-like intelligence

Why Build Models?
Forces you to be explicit about hypotheses and assumptions
Provides a framework for integrating knowledge from various fields
Allows you to observe complex interactions among hypotheses
Provides the ultimate in controlled experiments
Leads to empirical predictions
A mechanistic framework will ultimately be required to provide a unified theory of cortex.

Levels of Modeling
Single cell: ion flow, membrane depolarization, neurotransmitter release, action potentials, neuromodulatory interactions
Network: neurophysiology and neuroanatomy of cortical regions, cell firing patterns, inhibitory interactions, mechanisms of learning
Functional: operation and interaction of cortical areas, transformation of representations
Computational: input-output behavior, mathematical characterization of computation

Overview
Computational modeling
Artificial neural networks
Modeling performance after brain damage

Key Features of Cortical Computation
Neurons are slow (10^-3 to 10^-2 s propagation time)
Large number of neurons (10^10 to 10^11)
No central controller (CPU)
Neurons receive input from a large number of other neurons (fan-in and fan-out of cortical pyramidal cells is about 10^4)
Communication via excitation and inhibition
Statistical decision making (neurons that single-handedly turn on/off other neurons are rare)
Learning involves modifying coupling strengths (the tendency of one cell to excite/inhibit another)
Neural hardware is dedicated to particular tasks (vs. conventional computer memory)
Information is conveyed by the mean firing rate of a neuron, a.k.a. its activation

Modeling Individual Neurons
[Diagram: a model neuron; input activities x_1, x_2, ..., x_n arrive via weights w_1, w_2, ..., w_n, a bias b is added, and activation flows to the output activity y.]

Modeling Individual Neurons
Activation function: net = Σ_i w_i x_i + b, y = f(net)
f may be linear, binary threshold, or sigmoid.
[Diagram: the same model neuron, with plots of the linear, binary threshold, and sigmoid activation functions f(net).]
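
As a concrete illustration, here is a minimal Python sketch of a single model neuron (the function names and example values are invented for illustration, not taken from the lecture):

```python
import math

def net_input(x, w, b):
    """Net input: net = sum_i w_i * x_i + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# The three activation functions f(net) from the slide.
def linear(net):
    return net

def binary_threshold(net, theta=0.0):
    return 1.0 if net >= theta else 0.0

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

# Output activity y = f(net) for an example input.
x = [1.0, 0.0, 1.0]   # input activities
w = [0.5, -0.3, 0.8]  # weights
b = -0.2              # bias
net = net_input(x, w, b)
print(linear(net), binary_threshold(net), sigmoid(net))
```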

Computation With a Binary Threshold Unit
Or gate:
x1 x2 | y
 0  0 | 0
 0  1 | 1
 1  0 | 1
 1  1 | 1
[Diagram: a single threshold unit y with weighted connections from x1 and x2.]

Computation With a Binary Threshold Unit
And gate:
x1 x2 | y
 0  0 | 0
 0  1 | 0
 1  0 | 0
 1  1 | 1
[Diagram: a single threshold unit y with weighted connections from x1 and x2.]

Computation With a Binary Threshold Unit
Exclusive-or gate:
x1 x2 | y
 0  0 | 0
 0  1 | 1
 1  0 | 1
 1  1 | 0
Requires two layers: z1 = x1 or x2, z2 = x1 and x2, y = z1 and not z2.
[Diagram: a two-layer network of threshold units implementing z1, z2, and y.]
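
A quick sketch of these three gates as binary threshold units (the weights and thresholds are one workable choice, not necessarily those shown in the slides):

```python
def unit(inputs, weights, threshold):
    """Binary threshold unit: outputs 1 iff the weighted sum reaches threshold."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

def OR(x1, x2):   # a single unit suffices
    return unit([x1, x2], [1, 1], threshold=1)

def AND(x1, x2):  # a single unit suffices
    return unit([x1, x2], [1, 1], threshold=2)

def XOR(x1, x2):  # needs two layers: y = z1 and not z2
    z1 = OR(x1, x2)   # z1 = x1 or x2
    z2 = AND(x1, x2)  # z2 = x1 and x2
    return unit([z1, z2], [1, -1], threshold=1)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", OR(x1, x2), AND(x1, x2), XOR(x1, x2))
```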

Feedforward Architectures
Activation flows in one direction; no closed loops.
Performs association from an input pattern to an output pattern:
big, hairy, stinky → run away
small, round, orange → eat
big, round, soft → eat
small, orange, hairy → run away
stinky, yellow → eat
Learning: adjust connections to achieve the input-output mapping
[Diagram: layered network with the flow of activity from input to output.]

Recurrent Architectures
Achieves the best interpretation of partial or noisy patterns, e.g., MAR M LLOW
State space dynamics: attractor dynamics
Learning: establishes new attractors and shifts attractor basin boundaries
[Plot: state space shown as neuron 1 activity vs. neuron 2 activity, with trajectories flowing into attractors.]
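
To make attractor dynamics concrete, here is a minimal Hopfield-style sketch of pattern completion (the stored pattern and network size are toy values, and this particular network is an illustration, not the pathway model used later in the lecture):

```python
import numpy as np

# Store one pattern with the Hopfield outer-product rule (+1/-1 coding).
stored = np.array([1, -1, 1, 1, -1, 1, -1, 1])
W = np.outer(stored, stored).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

# Start from a degraded version of the pattern (two elements corrupted),
# analogous to completing MAR M LLOW to MARSHMALLOW.
state = stored.copy()
state[2] = -state[2]
state[5] = -state[5]

# Repeated threshold updates pull the state into the nearest attractor.
for _ in range(5):
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered stored pattern:", np.array_equal(state, stored))
```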

Necker Cube Example
Each vertex has two possible interpretations: in back or in front.
The interpretation of one vertex depends on the interpretation of the other vertices.
Constraint satisfaction problem (suitable for an attractor net)

Necker Cube Demo See http://www.cs.cf.ac.uk/dave/java/boltzman/necker.html

Supervised Learning in Neural Networks
1. Assume a set of training examples {x_i, d_i}
e.g., MAR M LLOW → MARSHMALLOW
e.g., big, hairy, stinky → run away
2. Define a measure of network performance (error), e.g., E = Σ_i ‖d_i - y_i‖^2
3. Make small incremental changes to the weights to decrease the error (gradient descent), i.e., Δw_ji ∝ -∂E/∂w_ji
For multilayered sigmoidal neural networks, the gradient descent update has a simple local form (it depends on the activity of neuron i and the error associated with neuron j).
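
A minimal sketch of this recipe for a single sigmoid unit (the task, learning rate, and initialization are invented for illustration): gradient descent on E = Σ_i (d_i - y_i)^2, where the weight update is local in exactly the sense described above.

```python
import math, random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

# Toy training set: learn OR. Each example is (x, d).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = random.uniform(-0.5, 0.5)
lr = 0.5  # learning rate: size of each incremental step

for epoch in range(2000):
    for x, d in data:
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        # Local gradient-descent update: depends only on this unit's
        # error (d - y), its slope y(1 - y), and the input activity x_i.
        delta = (d - y) * y * (1 - y)
        w = [wi + lr * delta * xi for wi, xi in zip(w, x)]
        b += lr * delta

print([round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b), 2)
       for x, _ in data])
```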

Modeling Neuropsychological Phenomena
Michael C. Mozer and Mark Sitton, Department of Computer Science and Institute of Cognitive Science, University of Colorado, Boulder
Martha Farah, Department of Psychology, University of Pennsylvania

Optic Aphasia
Neuropathology: unilateral left posterior lesions
Deficit in naming visually presented objects, in the absence of visual agnosia and general anomia
Nonverbal indications of recognition: sorting, gesturing
Naming possible given a verbal definition, tactile stimulation, or object sounds

Modeling Naming and Gesturing
Each arrow represents a processing pathway (a neural net).
Pathways act as associative memories.
[Diagram: visual and auditory inputs project to semantics, which projects to name and gesture outputs.]

Simple Lesion Cannot Explain Optic Aphasia
[Diagram: two copies of the visual/auditory → semantic → name/gesture architecture, each with a different single pathway lesioned.]

More Complex Architectures Are Unparsimonious
[Diagram: alternative architectures, e.g., separate visual semantics and verbal semantics feeding naming, or separate left-hemisphere and right-hemisphere semantic systems.]

Alternative Explanation
Partial damage to two systems (Farah, 1990): a superadditive effect of damage
[Diagram: the same architecture with partial damage to two pathways.]
Simulation by Sitton, Mozer, and Farah (2000)

Neural Network Implementation of Pathway
Mapping: multilayer feedforward network from the pathway input (input space) to the output space
Clean up: recurrent attractor network operating on the output space to produce the pathway output
[Diagram: pathway input → mapping (feedforward net) → clean up (attractor net) → pathway output.]

Model Dynamics
[Diagram: state units s(t) receive external input e(t) and feed the attractor units a(t), whose activity feeds back to the state units.]
Attractor unit update equation:
â_j(t) = exp(-‖s(t) - µ_j‖^2 / β_j)
a_j(t) = â_j(t) / Σ_i â_i(t)
State unit update equation:
s_i(t) = h( d_i(t) e_i(t) + (1 - d_i(t)) Σ_j a_j(t-1) µ_ji )
µ_j: center of attractor j
β_j: width of attractor j
d_i(t) = 1 - |ē_i(t-1) - e_i(t)|
ē_i(t) = α e_i(t) + (1 - α) ē_i(t-1)
[Plot: d(t) as a function of t.]
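
A rough numpy sketch of these dynamics under the reading above (the centers, widths, input, and the schedule for the gating term d are toy assumptions, and h is taken as the identity; this illustrates the normalized radial-basis attractor update, not the authors' code):

```python
import numpy as np

mu = np.array([[1.0, 0.0], [0.0, 1.0]])  # mu_j: centers of two attractors
beta = np.array([0.5, 0.5])              # beta_j: widths of the attractors

e = np.array([0.8, 0.3])  # external input (toy values)
s = e.copy()              # state units start at the external input
d = 1.0                   # gating term: rely on external input at first

for t in range(20):
    # Attractor units: normalized radial-basis response to the state.
    a_hat = np.exp(-np.sum((s - mu) ** 2, axis=1) / beta)
    a = a_hat / a_hat.sum()
    # State units: blend external input with top-down attractor feedback.
    d *= 0.5  # toy schedule: gating decays, feedback takes over
    s = d * e + (1 - d) * (a @ mu)

print("final state:", s, "attractor activities:", a)
```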

Key Properties of Neural Network Pathway
Gradual convergence of the pathway output on the best interpretation over time
Continuous availability of information from other pathways

Simulation Methodology
Define neural activity patterns in the visual, auditory, semantic, name, and gesture spaces
Pair patterns randomly
Train the four pathways to produce the correct associations
Lesion the model: remove 30% of the connections in the V→S and S→N pathways
Evaluate the lesioned model's performance
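
A sketch of the lesioning step only (the matrix shapes and the random-mask approach are assumptions for illustration, not the authors' code): remove a random 30% of a pathway's connections by zeroing entries of its weight matrix.

```python
import numpy as np

rng = np.random.default_rng(42)

def lesion(weights, fraction=0.30):
    """Remove a random fraction of connections by zeroing their weights."""
    mask = rng.random(weights.shape) >= fraction  # keep ~70% of connections
    return weights * mask

W_vs = rng.normal(size=(50, 40))  # toy V->S pathway weights
W_sn = rng.normal(size=(40, 30))  # toy S->N pathway weights
W_vs_lesioned = lesion(W_vs)
W_sn_lesioned = lesion(W_sn)
print("fraction removed:", 1 - np.count_nonzero(W_vs_lesioned) / W_vs.size)
```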

Error Rates by Task
(A: auditory, V: visual, S: semantic, N: name, G: gesture)

task | error rate | damaged pathways
A→G  |  0.0%      | (none)
A→N  |  0.5%      | S→N
V→G  |  8.7%      | V→S
V→N  | 36.8%      | V→S, S→N

A→N: clean up compensates for S→N pathway damage
V→G: clean up compensates for V→S pathway damage
V→N: effects of damage to the V→S and S→N pathways interact (noisy input plus internal damage to the S→N pathway)
The interaction would not occur if (a) the pathways operated sequentially, or (b) the pathways showed no hysteresis.
[Plot: quality of output over time for the damaged vs. undamaged model.]

Error Rates Based on Relative Damage
[Figure: grid of bar charts of error rate (0-100%) for 10% to 70% damage to the V→S pathway crossed with 10% to 70% damage to the S→N pathway.]

Distribution of Errors for Visual Object Naming
[Figure: proportion of total errors (0 to 1), actual vs. chance, by error type: semantic, visual, vis+sem, perseverative, other.]

Interactivity in Brain Damage
Neuropsychological disorders have traditionally been explained by a focal lesion to a single processing pathway.
Farah (1990) argued that certain highly selective deficits might have a parsimonious account in terms of multiple lesions with interactive effects.
The model illustrates the viability of this account.

Value of the Model
Past accounts have claimed the cognitive architecture is complex and unparsimonious: multiple semantic systems or multiple functional pathways to naming.
Instead, the model can explain optic aphasia via a simple cognitive architecture and multiple lesions (each with a single dimension of selectivity).
The model can explain other aspects of the phenomenon, e.g., naming errors tend to be semantic or perseverative, not visual.
The model might be useful for understanding the severity of lesions.