Dynamical Cognition 2010: New Approaches to Some Tough Old Problems. Simon D. Levy Washington & Lee University Lexington, Virginia, USA


1 Dynamical Cognition 2010: New Approaches to Some Tough Old Problems Simon D. Levy Washington & Lee University Lexington, Virginia, USA

2 Inspiration, 1995-present

3 Inspiration, 1995-present
"[I]t turns out that we don't think the way we think we think! ... The scientific evidence coming in all around us is clear: symbolic conscious reasoning, which is extracted through protocol analysis from serial verbal introspection, is a myth." J. Pollack (2005)
"[W]hat kinds of things, suggested by the architecture of the brain, if we modeled them mathematically, could give some properties that we associate with mind?" P. Kanerva (2009)
"... a fresh coat of paint on old rotting theories." B. MacLennan (1991)

4 What is Mind?

5-7 [image-only slides, no text content]

8 The Need for New Representational Principles
- Ecological affordances (Gibson 1979); exploiting the environment (Clark 1998)
- Distributed/connectionist representations (PDP 1986)
- Holographic representations (Gabor 1971; Plate 2003)
- Fractals / attractors / dynamical systems (Tabor 2000; Levy & Pollack 2001)


10 Pitfalls to Avoid 1. The Short-Circuit (Localist Connectionist) Approach
i) Traditional models of phenomenon X (language) use entities A, B, C, ... (Noun Phrase, Phoneme, ...)
ii) We wish to model X in a more biologically realistic way.
iii) Therefore our model of X will have a neuron (pool) for A, one for B, one for C, etc.

11 a.k.a. The Reese's Peanut Butter Cup Model

12 E.g. Neural Blackboard Model (van der Velde & de Kamps 2006)

13 Benefits of Localism (Page 2000)
- Transparent (one node, one concept)
- Supports lateral inhibition / winner-take-all

14 Lateral Inhibition (WTA) [diagram: competing units A, B, C in layer L1 projecting to layer L2]
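The winner-take-all dynamics are easy to see in a toy simulation. Below is a minimal, illustrative Python/NumPy sketch (not the talk's model; parameters chosen only so that the competition resolves): each unit is excited by its own activity and inhibited by the summed activity of its competitors, so the initially strongest unit drives the others to zero.

    import numpy as np

    x = np.array([0.30, 0.50, 0.20])    # initial activations of units A, B, C
    for _ in range(30):
        inhibition = x.sum() - x         # each unit is inhibited by all the others
        x = np.clip(x + 0.1 * (x - 0.5 * inhibition), 0.0, 1.0)
    print(x.round(3))                    # [0. 1. 0.] -- B wins; A and C are silenced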

15 Problems with Localism
- Philosophical problem: "a fresh coat of paint on old rotting theories" (MacLennan 1991). What new insights does neuro-X provide?
- Engineering problem: new hardware must be recruited for each new concept/combination, which leads to combinatorial explosion (Stewart & Eliasmith 2008)

16 The Appeal of Distributed Representations (Rumelhart & McClelland 1986)

17 WALKED WALK

18 ROARED ROAR

19 SPOKE SPEAK

20 WENT GO

21 Mary won't give John the time of day. ignores(mary, john)

22 Challenges (Jackendoff 2002)

23 I. The Binding Problem [diagram: pattern + pattern = ?]

24 II. The Problem of Two [diagram: adding a second token of the same pattern = ?]

25 III. The Problem of Variables ignores(X, Y) X won't give Y the time of day.

26 Vector Symbolic Architectures (Plate 1991; Kanerva 1994; Gayler 1998)

27 Tensor Product Binding (Smolensky 1990)

28 Binding

29 Bundling [diagram: vector + vector = superposed vector]

30 Unbinding (query)

31 Lossy


33 Cleanup: Hebbian / Hopfield / attractor net
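Because unbinding is lossy, its output must be cleaned up to the nearest stored item. The talk's setting uses an attractor network for this; a minimal Python/NumPy stand-in (illustrative names, not the talk's code) is nearest-neighbor matching against an item memory:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    item_memory = {name: rng.choice([-1, 1], size=n) for name in ("cat", "dog", "fish")}

    def cleanup(noisy):
        """Return the stored item most similar to the noisy query
        (a one-shot stand-in for Hopfield/attractor-net settling)."""
        return max(item_memory, key=lambda k: item_memory[k] @ noisy)

    # corrupt "dog" by flipping 30% of its components, then recover it
    noisy = item_memory["dog"].copy()
    noisy[rng.random(n) < 0.3] *= -1
    print(cleanup(noisy))   # dog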

34 Reduction (Holographic; Plate 2003)
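In Plate's Holographic Reduced Representations, binding is circular convolution, which keeps the bound vector at the same dimensionality as its inputs. A small sketch (assuming real-valued vectors with components drawn from N(0, 1/n), per Plate):

    import numpy as np

    def bind(a, b):
        """Circular convolution, computed in O(n log n) via the FFT."""
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def unbind(c, a):
        """Approximate inverse: convolve with the involution of a."""
        a_inv = np.concatenate(([a[0]], a[1:][::-1]))
        return bind(c, a_inv)

    rng = np.random.default_rng(0)
    n = 4096
    a, b = rng.normal(0.0, 1.0 / np.sqrt(n), size=(2, n))
    b_hat = unbind(bind(a, b), a)
    cos = b_hat @ b / (np.linalg.norm(b_hat) * np.linalg.norm(b))
    print(round(cos, 2))   # well above chance: close enough for cleanup to finish the job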

35 Reduction (Binary; Kanerva 1994, Gayler 1998)
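In the binary/bipolar family (Kanerva's Binary Spatter Codes, Gayler's MAP), binding is elementwise multiplication of +1/-1 vectors and bundling is a componentwise majority vote. A sketch of the two operations:

    import numpy as np

    def bind(a, b):
        """Elementwise multiply; every vector is its own inverse, since (+-1)^2 = 1."""
        return a * b

    def bundle(*vectors):
        """Componentwise majority vote; ties are broken toward +1 here."""
        return np.where(np.sum(vectors, axis=0) >= 0, 1, -1)

    rng = np.random.default_rng(0)
    a, b, c = rng.choice([-1, 1], size=(3, 10_000))
    s = bundle(a, b, c)
    print(a @ s, bind(bind(a, b), a) @ b)   # both large: s resembles a; binding inverts exactly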

36 Composition / Recursion

37 Variables john X

38 Scaling Up
With many (> 10K) dimensions, we get:
- an astronomically large number of mutually (quasi-)orthogonal vectors (symbols)
- surprising robustness to noise
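The quasi-orthogonality claim is easy to check numerically. The sketch below (illustrative) draws 100 random 10,000-dimensional bipolar vectors and measures their pairwise cosine similarities:

    import numpy as np

    rng = np.random.default_rng(0)
    num, n = 100, 10_000
    vs = rng.choice([-1, 1], size=(num, n))
    cos = (vs @ vs.T) / n                        # cosine similarity for bipolar vectors
    off = cos[~np.eye(num, dtype=bool)]          # off-diagonal pairs only
    print(off.mean().round(4), off.std().round(4))   # ~0.0 +/- 0.01: effectively orthogonal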

39 Pitfalls to Avoid 2. The Homunculus Problem, a.k.a. the Ghost in the Machine (Ryle 1949)
In cognitive modeling, the homunculus is the researcher: supervising learning, hand-building representations, etc.

40 Banishing the Homunculus

41 Step I: Automatic Variable Substitution
If A is a vector over {+1, -1}, then A*A = the vector of all 1s (the multiplicative identity).
Supports substitution of anything for anything: everything (names, individuals, structures, propositions) can be a variable!
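Self-inverse binding is what makes the substitution automatic: if S = A*B records that A is bound to B, multiplying S by either partner yields the other. A two-line illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.choice([-1, 1], size=(2, 10_000))
    assert ((A * A) == 1).all()         # A is its own inverse
    assert ((A * (A * B)) == B).all()   # so A "substitutes" B out of the binding, exactly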

42 What is the Dollar of Mexico? (Kanerva 2009)
Let X = <country>, Y = <currency>, and let the records be A = <USA> = X*U + Y*D and B = <Mexico> = X*M + Y*P (U = USA, D = dollar, M = Mexico, P = peso). Then:
D*A*B = D*(X*U + Y*D) * (X*M + Y*P)
= (D*X*U + D*Y*D) * (X*M + Y*P)
= (D*X*U + Y) * (X*M + Y*P)    [since D*D = 1]
= D*X*U*X*M + D*X*U*Y*P + Y*X*M + Y*Y*P
= P + noise    [since Y*Y = 1; the remaining terms are noise]
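The same derivation, run numerically (a sketch; vector names as on the slide, dimensionality chosen for comfortable signal-to-noise):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    X, Y, U, D, M, P = rng.choice([-1, 1], size=(6, n))
    A = X * U + Y * D             # the USA record
    B = X * M + Y * P             # the Mexico record
    query = D * A * B             # "what plays the dollar's role in Mexico?"
    atoms = dict(X=X, Y=Y, U=U, D=D, M=M, P=P)
    print(max(atoms, key=lambda k: atoms[k] @ query))   # P: the peso, plus noise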

43 Learning Grammatical Constructions from a Single Example (Levy 2010)
Given:
Meaning: KISS(MARY, JOHN)
Form: Mary kissed John
Lexicon: KISS/kiss, MARY/Mary, ...
What is the form for HIT(BILL, FRED)?

44 Learning Grammatical Constructions from a Single Example (Levy 2010)
(ACTION*KISS + AGENT*MARY + PATIENT*JOHN)
* (P1*Mary + P2*kissed + P3*John)
* (KISS*kissed + MARY*Mary + JOHN*John + BILL*Bill + FRED*Fred + HIT*hit)
* (ACTION*HIT + AGENT*BILL + PATIENT*FRED)
= ... = (P1*Bill + P2*hit + P3*Fred) + noise
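A sketch of the computation in Python. One way to finish the elided "..." steps, assumed here rather than taken from the paper, is to unbind each position from the product, clean up against the concept vocabulary, and read the surface word off the lexicon pairing:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    vec = lambda: rng.choice([-1, 1], size=n)
    ACTION, AGENT, PATIENT, P1, P2, P3 = (vec() for _ in range(6))
    concepts = {name: vec() for name in ("KISS", "MARY", "JOHN", "HIT", "BILL", "FRED")}
    words = {name: vec() for name in ("kissed", "Mary", "John", "hit", "Bill", "Fred")}
    word_of = dict(KISS="kissed", MARY="Mary", JOHN="John", HIT="hit", BILL="Bill", FRED="Fred")

    meaning1 = ACTION*concepts["KISS"] + AGENT*concepts["MARY"] + PATIENT*concepts["JOHN"]
    form1    = P1*words["Mary"] + P2*words["kissed"] + P3*words["John"]
    lexicon  = sum(concepts[c] * words[word_of[c]] for c in concepts)
    meaning2 = ACTION*concepts["HIT"] + AGENT*concepts["BILL"] + PATIENT*concepts["FRED"]

    raw = meaning1 * form1 * lexicon * meaning2   # ~ P1*BILL + P2*HIT + P3*FRED + noise
    for pos in (P1, P2, P3):                      # unbind each position, clean up, map to a word
        best = max(concepts, key=lambda c: concepts[c] @ (pos * raw))
        print(word_of[best], end=" ")             # Bill hit Fred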

45 Step II: Distributed Lateral Inhibition
Analogical mapping as holistic graph isomorphism (Gayler & Levy 2009) [diagram: vertices A, B, C, D of one graph mapped onto vertices P, Q, R, S of another] cf. Pelillo (1999)

46 [same mapping diagram: A, B, C, D onto P, Q, R, S]
Possibilities x: A*P + A*Q + A*R + A*S + ... + D*S
Evidence w: A*B*P*Q + A*B*P*R + ... + B*C*Q*R + ... + C*D*R*S
x*w = A*Q + B*R + ... + A*P + ... + D*S
What kind of program could work with these data structures to yield a single consistent mapping?

47 Replicator Equations
Starting from some initial state (typically x_i = 1/N, corresponding to all x_i being equally supported as part of the solution), x can be obtained through iterative application of:
x_i(t+1) = x_i(t) π_i(t) / Σ_j x_j(t) π_j(t), where π_i(t) = Σ_j w_ij x_j(t)
and w is a linear function of the adjacency matrix of the association graph (the "evidence matrix").
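A sketch of the iteration in Python, on a toy 3-candidate payoff matrix (illustrative, not from the talk) in which candidates 0 and 2 support each other and candidate 1 supports nothing:

    import numpy as np

    def replicator(w, steps=100):
        """Iterate the discrete replicator equation on payoff matrix w."""
        x = np.full(w.shape[0], 1.0 / w.shape[0])   # all candidates equally supported
        for _ in range(steps):
            pi = w @ x                               # payoff of each candidate
            x = x * pi / (x @ pi)                    # renormalized multiplicative update
        return x

    w = np.array([[0., 0., 1.],
                  [0., 0., 0.],
                  [1., 0., 0.]])
    print(replicator(w))   # [0.5 0.  0.5]: support concentrates on the consistent pair {0, 2}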

48 Replicator Equations
Origins in evolutionary game theory (Maynard Smith 1982):
- x_i is a strategy (belief in a strategy)
- π_i is the overall payoff from that strategy
- w_ij is the utility of playing strategy i against strategy j
Can be interpreted as a continuous inference equation whose discrete-time version has a formal similarity to Bayesian inference (Harper 2009)

49 Localist Implementation Results (Pelillo 1999)

50 VSA Lateral Inhibition Circuit (Levy & Gayler 2009) [circuit diagram: x_t multiplied by w, passed through cleanup and normalization (π_t) to produce x_t+1]

51 VSA Implementation Results tinyurl.com/gidemo

52 Conclusions
Vector Symbolic Architectures: a new kind of distributed representation for cognitive computing
- robust to noise
- rapid (one-shot) learning
- everything is a variable
- solves complicated problems in parallel
Replicator equations: a dynamical system from evolutionary game theory, adapted to solve graph problems (analogies); can be made more plausible by using VSA instead of localist representation

53 Current / Future Work
Subgraph mapping [diagram: graph with vertices A, B, C, D, E mapped onto a smaller graph with vertices P, Q, R, S]
Using Map-Seeking Circuits (Arathorn 2002) to isolate sub-parts
