
Neural Computation, Analogical Promiscuity, and the Induction of Semantic Roles: A Preliminary Sketch Simon D. Levy Washington & Lee University Lexington, Virginia, USA Ross Gayler Melbourne, Australia

Inspiration(s) "[I]t turns out that we don't think the way we think we think! ... The scientific evidence coming in all around us is clear: Symbolic conscious reasoning, which is extracted through protocol analysis from serial verbal introspection, is a myth." (J. Pollack) "[W]hat kinds of things suggested by the architecture of the brain, if we modeled them mathematically, could give some properties that we associate with mind?" (P. Kanerva)

Two Dogs of Empiricism* 1. The Short-Circuit (Localist) Approach: (i) Traditional models of phenomenon X (language) use entities A, B, C, ... (Noun Phrase, Phoneme, ...). (ii) We wish to model X in a more biologically realistic way. (iii) Therefore our model of X will have a neuron (pool) for A, one for B, one for C, etc. * with apologies to W.V.O. Quine

E.g. Neural Blackboard Model (van der Velde & de Kamps 2006)

Benefits of Localism (Page 2000) Transparent (one node, one concept) Supports lateral inhibition / winner-take-all dynamics

Problems with Localism Philosophical problem: a fresh coat of paint on old rotting theories (MacLennan 1991): what new insights does neuro-X provide? Engineering problem: the need to recruit new hardware for each new concept/combination leads to combinatorial explosion (Stewart & Eliasmith 2008)

The Appeal of Distributed Representations (Rumelhart, McClelland, et al. 1986)

[Example word pairs: WALKED/WALK, ROARED/ROAR, SPOKE/SPEAK, WENT/GO]

Two Dogs of Empiricism 2. The Homunculus Problem, a.k.a. the Ghost in the Machine (Ryle 1949). In cognitive modeling, the homunculus is the researcher: supervises learning, hand-builds representations, etc.

Beyond Associationism Mary won't give John the time of day. ignores(mary, john)

The Binding Problem [diagram]

The Problem of Two [diagram]

The Problem of Variables ignores(X, Y) X won't give Y the time of day.

Vector Symbolic Architectures (Plate 1991; Kanerva 1994; Gayler 1998)

Tensor Product Binding (Smolensky 1990)
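In Smolensky's tensor product scheme, a role vector is bound to a filler vector by their outer product, a structure is bundled by adding the bound tensors, and a filler is retrieved by projecting the tensor back through its role vector (exactly, if the roles are orthonormal; approximately, if they are merely nearly orthogonal). A minimal numpy sketch, with illustrative dimensionality and vector names:

```python
import numpy as np

n = 1000                                   # dimensionality of role/filler vectors (illustrative)
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)

# Random, nearly orthogonal role and filler vectors
agent, patient = unit(rng.standard_normal(n)), unit(rng.standard_normal(n))
mary, john = unit(rng.standard_normal(n)), unit(rng.standard_normal(n))

# Binding = outer product of role and filler; bundling = sum of bound pairs
S = np.outer(agent, mary) + np.outer(patient, john)

# Unbinding: project the tensor back through a role vector to recover its filler
filler = S.T @ agent                       # approximately mary, plus crosstalk
print(round(float(np.dot(unit(filler), mary)), 3))   # close to 1.0
print(round(float(np.dot(unit(filler), john)), 3))   # close to 0.0
```

The cost of this scheme is that the bound representation grows with each level of structure, which is what the reduced representations introduced below avoid.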

Binding

Bundling (+)

Unbinding (query)

Lossy


Cleanup Hebbian / Hopfield / Attractor Net

Reduction (HRR)

Reduction (MAP/BSC)
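In MAP/BSC-style reductions the bound vector keeps the same dimensionality as its inputs: vectors are random bipolar (+1/-1) patterns, binding is element-wise multiplication (which is its own inverse), bundling is element-wise addition (here thresholded back to +1/-1 with the sign function), and cleanup is a nearest-neighbour match against an item memory. A minimal sketch, assuming random bipolar vectors and toy role/filler names of my own choosing:

```python
import numpy as np

n = 10_000
rng = np.random.default_rng(1)
vec = lambda: rng.choice([-1, 1], size=n)   # random bipolar (+1/-1) vector

# Item memory used by cleanup: return the best-matching known vector
items = {name: vec() for name in ["agent", "patient", "mary", "john"]}
def cleanup(x):
    return max(items, key=lambda k: np.dot(x, items[k]))

bind = lambda a, b: a * b                            # elementwise multiply; self-inverse
bundle = lambda *vs: np.sign(np.sum(vs, axis=0))     # elementwise add, then threshold

# Encode ignores(mary, john) as a single n-dimensional vector
s = bundle(bind(items["agent"], items["mary"]),
           bind(items["patient"], items["john"]))

# Unbinding is just rebinding with the role; the result is noisy, hence cleanup
print(cleanup(bind(s, items["agent"])))    # -> mary
print(cleanup(bind(s, items["patient"])))  # -> john
```

Unbinding returns only a noisy version of the stored filler, so the cleanup memory of the previous slide (Hebbian / Hopfield / attractor net, reduced here to a dot-product lookup) is what makes the answer usable.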

Composition / Recursion

Variables john X

Recent Applications Modeling Surface and Structural Properties in Analogy Processing (Eliasmith & Thagard 2001) Variables & Quantification / Wason Task (Eliasmith 2005) Representing Word Order in a Holographic Lexicon (Jones & Mewhort 2007)

Banishing the Homunculus

Step I: Automatic Variable Substitution If A is a vector over {+1, -1}, then A*A = the vector of all 1s (the multiplicative identity). This supports substitution of anything for anything: everything (names, individuals, structures, propositions) can be a variable!

What is the Dollar of Mexico? (Kanerva, to appear)
Let X = <country>, Y = <currency>, and let U, D, M, P stand for USA, Dollar, Mexico, and Peso.
The records are A = X*U + Y*D (USA) and B = X*M + Y*P (Mexico). Then
D*A*B = D*(X*U + Y*D) * (X*M + Y*P) = (D*X*U + D*Y*D) * (X*M + Y*P) = (D*X*U + Y) * (X*M + Y*P) = D*X*U*X*M + D*X*U*Y*P + Y*X*M + Y*Y*P = P + noise
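The derivation can be checked numerically. The sketch below assumes MAP-style random +1/-1 vectors and mirrors the variable names above; it builds the two records, poses the query, and compares the result against the candidate fillers:

```python
import numpy as np

n = 10_000
rng = np.random.default_rng(2)
vec = lambda: rng.choice([-1, 1], size=n)

X, Y = vec(), vec()        # roles: <country>, <currency>
U, D = vec(), vec()        # fillers for the USA record: USA, Dollar
M, P = vec(), vec()        # fillers for the Mexico record: Mexico, Peso

A = X * U + Y * D          # record for the USA
B = X * M + Y * P          # record for Mexico

answer = D * A * B         # what plays in B the role that the Dollar plays in A?
for name, v in [("Peso", P), ("Dollar", D), ("Mexico", M), ("USA", U)]:
    print(name, round(float(np.dot(answer, v)) / n, 2))
# Peso comes out near 1.0; the remaining terms behave as near-zero noise
```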

Learning Grammatical Constructions from a Single Example (Levy, to appear) Given Meaning: KISS(MARY, JOHN) Form: Mary kissed John Lexicon: KISS/kiss, MARY/Mary,... What is the form for HIT(BILL, FRED)?

Learning Grammatical Constructions from a Single Example (Levy, to appear) (ACTION*KISS + AGENT*MARY + PATIENT*JOHN) * (P1*Mary + P2*kissed + P3*John) * (KISS*kissed + MARY*Mary + JOHN*John + BILL*Bill + FRED*Fred + HIT*hit) * (ACTION*HIT + AGENT*BILL + PATIENT*FRED) = ... = (P1*Bill + P2*hit + P3*Fred) + noise
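The line above compresses several algebra steps. The sketch below is one illustrative way to realize it numerically, not necessarily Levy's exact procedure: vectors are random +1/-1 patterns, binding is element-wise multiplication, and the lexicon is multiplied in twice, once to strip the example's word forms (yielding a construction that binds positions to roles) and once to attach word forms to the new meaning.

```python
import numpy as np

n = 10_000
rng = np.random.default_rng(3)
vec = lambda: rng.choice([-1, 1], size=n)

names = ("ACTION AGENT PATIENT P1 P2 P3 "
         "KISS MARY JOHN HIT BILL FRED "
         "kissed Mary John hit Bill Fred").split()
V = {name: vec() for name in names}

M1 = V["ACTION"]*V["KISS"] + V["AGENT"]*V["MARY"] + V["PATIENT"]*V["JOHN"]   # meaning 1
F1 = V["P1"]*V["Mary"] + V["P2"]*V["kissed"] + V["P3"]*V["John"]             # form 1
L  = (V["KISS"]*V["kissed"] + V["MARY"]*V["Mary"] + V["JOHN"]*V["John"] +
      V["BILL"]*V["Bill"] + V["FRED"]*V["Fred"] + V["HIT"]*V["hit"])         # lexicon
M2 = V["ACTION"]*V["HIT"] + V["AGENT"]*V["BILL"] + V["PATIENT"]*V["FRED"]    # meaning 2

C  = M1 * F1 * L     # construction: ~ P1*AGENT + P2*ACTION + P3*PATIENT + noise
F2 = C * M2 * L      # new form:     ~ P1*Bill  + P2*hit    + P3*Fred    + noise

words = ["Mary", "kissed", "John", "Bill", "hit", "Fred"]
for pos in ["P1", "P2", "P3"]:
    probe = F2 * V[pos]                               # unbind the position marker
    print(pos, max(words, key=lambda w: np.dot(probe, V[w])))
# -> P1 Bill / P2 hit / P3 Fred
```

Unbinding the resulting form vector with each position marker and cleaning up against the word vocabulary recovers the form "Bill hit Fred" from a single example.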

Step II: Distributed Lateral Inhibition Analogical mapping as holistic graph isomorphism (Gayler & Levy, in progress) [diagram: two graphs, one over vertices A, B, C, D and one over vertices P, Q, R, S] cf. Pelillo (1999)

[diagram: the same two graphs over A, B, C, D and P, Q, R, S]
Possibilities x: A*P + A*Q + A*R + A*S + ... + D*S
Evidence w: A*B*P*Q + A*B*P*R + ... + B*C*Q*R + ... + C*D*R*S
x*w = A*Q + B*R + ... + A*P + ... + D*S

[diagram: iterative circuit in which x_t is multiplied by the evidence vector w, passed through cleanup, and normalized to produce x_t+1]

Step III: Automatic (De)composition of Entities MSC (Map-Seeking Circuits; Arathorn 2002)