Network Computing and State Space Semantics versus Symbol Manipulation and Sentential Representation


Brain Unlike Turing Machine/Symbol Manipulator

1. The nervous system has a massively parallel architecture: different signals are processed in millions of pathways simultaneously (Parallel Distributed Processing (PDP)). This makes for very fast response to certain kinds of complex computational problems (perceptual discrimination, motor coordination, speech recognition/production); see also 3. ("Could a Machine Think?" pp. 35-36 (56-59), M&C p. 154)

2. The brain's basic processing unit (the neuron) is simple and analog. (Think? p. 35 (56))

3. The hypothesis is that the brain represents in something like the way network systems are thought to represent.

4. Recurrent nature of many portions of the brain: axons connecting one set of neurons to a second are matched by another set of axons projecting back to the first population.

5. Fault and damage tolerant: cognitive processing is often not significantly affected by minor cell damage and scattered cell death; even when damage reaches a degree that affects processing, processing quality degrades gradually rather than dropping off sharply. This is, in part, a result of the parallel, distributed nature of the brain. The brain is also reliable even on degraded input.

6. Brains are plastic: transformational properties can be reconfigured. This allows for change and learning, and for certain parts of the brain to adapt to damage in other areas (fault tolerance again).

7. It is not at all clear that cognitive processing is entirely, or even mostly, rule-governed symbol manipulation. The physical structure suggests that it may be vector transformation.

Network Systems Like Brains

1. Implement PDP.

2. The processing unit is simple, and can be analog.

3. Distributed representation and computation: the massively parallel structure means that i) long-term representations are stored in a distributed manner across the network (in the configuration of connection weights), which allows for quick access of relevant information; ii) short-term representations are encoded in the activation vectors (activation levels or spiking frequencies) of a number (often very large) of neurons; and iii) transformations of activation vectors are carried out in parallel by a number (often very large) of units (M&C, Think? pp. 36-37 (56-59)). This relates to 1. and 5.

4. A network can be recurrent; this i) allows it to modulate information processing in light of the immediate informational context, and ii) suggests that the brain is a complex dynamical system, the behavior of which is to some degree independent of its stimuli. (Think? p. 35 (56))

5. The parallel, distributed nature of the computational and representational architecture in networks also yields these fault- and damage-tolerance properties. (Think? p. 37 (59), M&C p. 154)

6. Networks can also be plastic.

7. The basic mode of operation is NOT rule-governed symbol manipulation, but vector transformation.

The claim is not that classical TM/SMs (or some slight modification thereof) cannot have any of the above properties, but that networks (and, it is supposed, brains) naturally display these properties in virtue of their basic functional architecture, whereas an SM would have to be specially constructed and programmed to display them (and a classical approach is also going to have speed issues). Moreover, the Churchlands see the greatest prospect of generating artificial intelligence, and the greatest prospect of understanding terrestrial intelligence in general, in the empirical investigation of brains and network systems. Remember, however, that they make this not as an a priori claim, but as an empirical, and therefore falsifiable, claim.

Some Key Concepts in Network Computation and Representation:

Vector: A vector is just an ordered n-tuple of n quantities (e.g., <.2, .7, .5, .9> is an ordered quadruple), which signifies the quantities of n (here, four) variables. A vector can be represented either as an n-tuple or as a point in an n-dimensional state-space.

State-Space: An n-dimensional system of coordinates for representing information encoded as n-dimensional vectors. For instance, suppose we wanted a state space for encoding information about cars. We might start with a simple 1-dimensional space (a number line), and we could represent the model year of any given car as a point on that line. Next, suppose we want to include the Blue Book value of the car. We simply add a second dimension, giving us a 2-dimensional space (a Cartesian plane), and we encode the year and value as a point in that space which corresponds to the ordered pair of numbers for those quantities: <2000, 17500>. (Note that the two scales need not be identical.) If we add average miles per gallon on the highway, we get a cube, and each car can be represented by a point in that cube space. If we add average MPG for city driving, we now have a 4-d solid (a hypercube; here is where most of us start having difficulty visualizing, but that doesn't really matter). We can add dimensions for gross weight, safety ratings, reliability ratings, passenger capacity and cargo capacity, current mileage, repair and maintenance record: essentially whatever we think of. This is not a literal space, but an informational space; it allows us to represent real properties of objects and their interrelations. Given that it is a geometrical mode of representation, we can expect to find important and revealing relations of distance, proximity, and betweenness; important and revealing significance to certain sub-spaces (regions) within the overall state-space; or important and revealing significance to how the position of a point or solid evolves over time (e.g., as age and mileage increase, MPG ratings are likely to decrease, though less quickly for the more reliable cars with better maintenance and repair records...).
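To make the geometry concrete, here is a minimal Python sketch of the car state space just described. The cars and their numbers are invented for illustration, and a serious similarity metric would first rescale each axis, since (as noted above) the scales of the dimensions need not be identical.

```python
# A minimal sketch of a "state space" for cars; all data are invented.
import math

# Each car is a point in a 4-d state space:
# <model year, Blue Book value, highway MPG, city MPG>
cars = {
    "sedan_a": (2000, 17500, 32.0, 24.0),
    "sedan_b": (2001, 16900, 31.0, 23.0),
    "truck_c": (1998, 9000, 18.0, 14.0),
}

def distance(p, q):
    """Euclidean distance between two points in the state space.
    (A real metric would first rescale each axis, since the
    dimensions use very different units.)"""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Proximity in the space encodes similarity of the cars:
print(distance(cars["sedan_a"], cars["sedan_b"]))  # small: similar cars
print(distance(cars["sedan_a"], cars["truck_c"]))  # large: dissimilar
```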
Network: A system consisting of 2 or more layers of nodes, with each layer consisting of varying numbers of nodes. Individually, these nodes perform some simple (or in some cases more complex) arithmetical operation on their input and produce an output. The first layer, or input layer (usually at the bottom of the diagram), accepts a numerically encoded input, an input vector: usually a series of quantities each of which ranges from 0 to 1, with each node in the layer being fed a number. The final layer, or output layer, delivers a similar output vector (one number for each node in the output layer), which may then be interpreted by human users (Mine/Rock), become the input for other networks, or drive a motor system. The output layer nodes and those in any hidden layers (layers between the input and output) simply sum the weighted inputs from the nodes of the previous layer, the weighting of an input being a function of the connection weight between the nodes of the two layers. This results in a vector transformation (a change in the quantities and, possibly, the dimensionality of the vector), the basic form of computation in a network system. Each node at layer i > 1 then produces an output which is a non-linear function of its activation level (see NCP pp. 161, 172). If there are one or more hidden layers, the varying connection weights between these are responsible both for the storage of long-term representations and for the transformation of activation vectors.

Activation Vector: An n-place vector representing the activation levels of n nodes in a network layer. It can be represented as an ordered n-tuple or as a point in a state-space (called an activation-vector space). Activation vectors can be interpreted as occurrent representations (e.g., of a particular taste, the position of a limb, the discrimination of an object), and regions of activation-vector space can be interpreted as prototype representations. Continuous occurrent representation (e.g., the motion of a limb, the motion of an object in a visual field, the parsing of sentence structure) is understood as the motion of a point in activation space (as the activation levels in a layer evolve over time as a function of the changing input).
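Here is a minimal sketch of an activation vector and its trajectory, assuming a sigmoid squashing function (the text says only that each node's output is some non-linear function of its activation level); the weights and inputs are invented.

```python
# A 2-node layer turns each 3-d input into a point in a 2-d activation
# space; a continuously changing input traces a trajectory of points.
import numpy as np

W = np.array([[0.8, -0.4, 0.3],      # invented connection weights:
              [-0.2, 0.9, 0.5]])     # 2 layer nodes x 3 input nodes

def activation_vector(input_vec):
    """Sum the weighted inputs at each node, then squash non-linearly."""
    return 1.0 / (1.0 + np.exp(-(W @ input_vec)))

# A continuously changing input (e.g., a limb moving) yields a moving
# point in activation space:
for t in np.linspace(0.0, 1.0, 5):
    x = np.array([t, 1.0 - t, 0.5])      # input vector at time t
    print(t, activation_vector(x))       # point on the trajectory
```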

Connection Weights: Each node in layer i > 1 is fed the value of each node from layer i - 1, and each layer-i node sums these values. This would lead to each node in layer i having the same value (the sum of the layer i - 1 values), except that each input to an individual node in layer i is weighted, by being multiplied by some number (the weight of that connection). (Weightings can also be more complex functions, but we'll keep it simple.) The inputs from each layer i - 1 node receive different weightings as they arrive at different layer-i nodes, and, as a result, any vector transformation can be computed. Each node at layer i then produces an output which is a non-linear function of its activation level (see NCP pp. 161, 172). Even a very simple three-layer network can compute complex vector transformations. The more nodes at each given layer, the higher the dimensionality of the activation vector; and the more connections and weightings in the system, the more fine-grained the computational power. The connection weights can also be represented as a vector (a weight-vector), and so a particular configuration of weights for a layer in a network can be represented as a point in a high-dimensional state-space (a weight space). It is in the weight configuration that longer-term information is represented, for it is the weight vector which determines how the activation vectors are transformed. Learning, or training, is the adjustment of connection weights. Hence, change in long-term representation can be depicted as the evolution of a point through weight space.

Training up: The process by which an artificial network is taught, or learns, or is trained for a particular task. This involves starting the network out with random connection weightings, feeding the network input vectors for which the desired output vector is already known, and slightly adjusting the weightings in response to the degree of error in the actual output. (See NCP pp. on the generalized delta rule.) Eventually, the weightings in the system settle into a configuration in which the degree of error is very small for the training set. The network can then produce reliable and correct output for the task on new sets of inputs. In some sense the network learns to discriminate mines from rocks, discriminate faces or colors, catch a ball, walk, parse a sentence, add numbers...

Example of vector transformation mathematics: Suppose we have a network like the one on the next page (M&C Fig. 7.17, OTC Fig. 5.3, NCP Fig. 9.4) with 4 nodes in the input layer, 3 in the hidden layer, and 4 in the output layer. Label the input nodes a, b, c, d, and label the 3 hidden-layer nodes x, y, z. Each hidden node (take x, for example) has four weighted connections, one each from a, b, c, and d; label these connection weights x_a, x_b, x_c, x_d, and similarly for nodes y and z. Then the vector transformation yielding the activation vector for the hidden nodes x, y, z is calculated by matrix multiplication:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x_a & x_b & x_c & x_d \\ y_a & y_b & y_c & y_d \\ z_a & z_b & z_c & z_d \end{pmatrix} \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix}$$

That is, multiply a by each of x_a, y_a, z_a; multiply b by each of x_b, y_b, z_b; and similarly for c and d. Then sum across the x row to get the value for x, and likewise for the y and z rows, yielding the final (now 3-tuple) vector <x, y, z>. This is the activation vector, and it is a straightforward mathematical function of the input vector <a, b, c, d> and the connection weights x_a, y_a, z_a, x_b, y_b, z_b, etc. A similar explanation appears with figure 5.8 from NCP.
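To make the example concrete, here is a minimal Python (NumPy) sketch of the 4-3-4 network's forward pass, with one delta-rule-style adjustment of the output weights. The sigmoid squashing function, the particular weights, the input, and the target vector are all invented for illustration; a full treatment would use the generalized delta rule from NCP and would also adjust the hidden-layer weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.uniform(-1, 1, (3, 4))   # rows: x_a..x_d, y_a..y_d, z_a..z_d
W_output = rng.uniform(-1, 1, (4, 3))   # 4 output nodes x 3 hidden nodes

def sigmoid(v):
    """Hypothetical non-linear squashing function for node outputs."""
    return 1.0 / (1.0 + np.exp(-v))

def forward(inp):
    """The matrix multiplication above: <a,b,c,d> -> <x,y,z> -> output."""
    hidden = sigmoid(W_hidden @ inp)
    return sigmoid(W_output @ hidden), hidden

inp = np.array([0.2, 0.7, 0.5, 0.9])        # input vector <a, b, c, d>
target = np.array([1.0, 0.0, 0.0, 0.0])     # known desired output (invented)

# One "training up" step: nudge each output weight in proportion to the
# error it contributed to (a delta-rule-style update; the learning rate
# 0.5 is arbitrary).
out, hidden = forward(inp)
error = target - out
W_output += 0.5 * np.outer(error * out * (1.0 - out), hidden)
print("error before:", np.abs(error).sum())
print("error after: ", np.abs(target - forward(inp)[0]).sum())
```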

[Pages 4-9 of the original handout contained the network diagrams referred to above: M&C Fig. 7.17, OTC Fig. 5.3, and NCP Figs. 9.4 and 5.8.]

Problems with Sentential Representation as Basic to Cognition

(Originally a three-column table; left: philosophy of science, center: philosophy of mind, right: artificial intelligence.)

Philosophy of Science: Sentential View of Theories
- The formalism of propositions and inference rules has immense difficulty accounting for the rationality of theory development and testing:
  o problems of induction (description & justification)
  o paradoxes of confirmation, indeterminacy of falsification
  o nature and rationality of laws
- Rationality of large-scale conceptual change extremely problematic
- No good theory of learning (see center & right)
- Frame Problem (see center & right)
- The notion of simplicity is seen as important, but its nature and a metric for measuring it are left unexplained
- Poor understanding of perceptual knowledge and the theory-laden nature of observation
- Kuhnian paradigms show how important non-linguistic elements are, but the sentential view fails to address this in an integrated fashion: such things as skill at experimental set-up and interpretation; variance in experimenters' attitudes and responses to anomaly; the nature, difficulty, and rationality of conceptual change
- Nature of explanation problematic
- No contact with microstructure of brain; no explanation of how sentential representations are implemented in thinkers

Philosophy of Mind: FP Account of Individual Representation and Inference
- Explanatory gaps between:
  o human and non-human representation
  o representation in verbal and non-verbal children
  o sensory perception/imagination and thought as "talking to oneself," including the characterization of perceptual knowledge
- Learning in general poorly understood on the sentential paradigm
- Learning of a language must be explained in terms of the formulation and testing of hypotheses, but this either leads to regress or requires a pre-existing (innate) language; the plausibility of this is highly problematic
- Difficulty of accounting for the role of:
  o background knowledge, esp. quick and relevant retrieval thereof; the Frame Problem (see right)
  o creative thought
  o context sensitivity
  o analogy, metaphor
- Difficulty accounting for the nature of concepts:
  o concepts as linguistically encoded necessary and sufficient conditions has various conceptual problems, including initial implausibility and the implausibility of the innateness requirement
  o the prototype view of concepts is more plausible, but understanding prototypes as linguistically encoded has difficulties: description of the relevant prototype, similarity conditions and a metric on similarity
- No contact with microstructure of brain

Artificial Intelligence: Symbol Manipulator/Turing Machine Approach
- Difficulty of dealing with background knowledge:
  o storing a list of linguistically structured elements is inefficient and implausible
  o the Frame Problem: the ability to store and relevantly draw on background knowledge quickly and as needed is very poor (exhaustive search: slow; relevant search: the need to encode and search relevance conditions leads to regress and an increase in storage costs)
- Signal propagation in a computer is a million times faster than in the brain, and the clock frequency of CPUs is faster than any frequency in the brain, YET on many realistic tasks confronted by biological creatures (perception, motor control, speech recognition, etc.) the brain is far faster than a symbol manipulator
- Difficulty in accounting for learning and conceptual change (see center & left)
- No contact with microstructure of brain (see above)

State Space Semantics: A View of Representation Based on Neurological and Network-Computational Insights

- Most basically, occurrent representations can be understood as points, trajectories, or volumes in the activation-vector space of a system; longer-term representations can be understood as points in the weight space. Transformations on representations ("inference," "computation") can be understood as vector transformations in a network system.
- The role of these representations and transformations is a result of the particular configuration of connection weights in the system, for the weight configuration partitions the activation-vector space into various sub-volumes and prototype regions. These are the longer-term representations. Ongoing activation follows a trajectory through that space.
- Since the configuration of weights can be represented as a point in the system's overall weight-space, we can consider the various points in weight space as various theories. Hence, learning and conceptual change can be accounted for as the evolution, over time, of that point through weight space, i.e., the gradual adjustment of the system's connection weights, where this is carried out by some form of supervised or unsupervised learning process: individual learning of the slow and structural kind (see the sketch following this list).
- Learning, especially after the initial connection weights are set in youth, can also be accounted for as the redeployment of conceptual resources: some subset of prototypes and their internal structure can be redeployed to handle a new domain of phenomena. This is aided by the recurrent pathways in recurrent networks: individual learning of the fast and dynamical kind.
- Learning of language no longer needs to presuppose a pre-existing language; there is a more fundamental mode of representation and learning already in place.
- Offers a more general and more fundamental mode of representation and transformation of representations, which may well apply across species and is able to ground other forms of representation, including symbolic and linguistically structured representation.
- This does not produce any explanatory gaps, and indeed has the potential to show continuity where FP perceives gaps.
- This suggests that the sentential view of theories and of individual representation, as well as of inductive inference, is highly superficial; no wonder it fails to explain our most complex cognition and our most significant periods of (individual and social) conceptual change.
- The nature of perceptual input to cognition is clarified: there is no cognition without some particular weight configuration, so all observation is theory-laden.
- Background knowledge is not stored in a list of sentences, and so the relevance-search problem is avoided. Indeed, the relevant background knowledge is distributed across the network in the weightings; it does not need to be accessed. The vectors evolve over time simply in virtue of how the activation space is partitioned, so there is no problem: background information has an immediate and relevant effect on cognition/processing.
- Context sensitivity is accounted for in a similar way, especially for recurrent networks, those in which projections from some nodes/neurons reach back to connect to nodes earlier in the information flow. In such networks, recently processed information can have an immediate and ongoing modulating effect on incoming information.
- Holistic and Kuhnian insights are seen to be consequences of this style of representation and cognition, including the nature and difficulty of paradigm shifts (escaping local error minima) and the importance of situated knowledge (knowledge regarding one's bodily position and purposes in a spatio-temporal world).
- Yields powerful responses to/views of: the problems of induction, the natures and roles of explanation, inference to the best explanation, simplicity, analogy.
- Massive parallel processing etc. makes contact with brain microstructure... see above...
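As a toy illustration of learning as the evolution of a point through weight space, here is a hedged Python sketch: the error surface is invented, gradient descent stands in for whatever learning process adjusts the weights, and the bumps in the surface stand in for the local error minima mentioned above.

```python
# "Learning as movement through weight space": gradient descent on an
# invented error surface over a 2-d weight space.
import numpy as np

def error(w):
    """Hypothetical error surface over a 2-d weight space (a bumpy bowl)."""
    w1, w2 = w
    return (w1**2 + w2**2) + 1.5 * np.sin(3 * w1)

def gradient(w, eps=1e-5):
    """Numerical gradient of the error surface at point w."""
    g = np.zeros(2)
    for i in range(2):
        step = np.zeros(2)
        step[i] = eps
        g[i] = (error(w + step) - error(w - step)) / (2 * eps)
    return g

w = np.array([2.0, 2.0])          # initial "theory": a point in weight space
trajectory = [w.copy()]
for _ in range(100):              # training: small adjustments of the weights
    w -= 0.05 * gradient(w)       # move downhill in error
    trajectory.append(w.copy())

print(trajectory[0], "->", trajectory[-1])
# Depending on the starting point, descent can settle in a local error
# minimum; escaping it requires a larger reconfiguration of the weights,
# which is the analogue of a paradigm shift.
```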

Works Cited:

NCP: Churchland, Paul M. (1989). A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Cambridge, Mass.: The MIT Press.

Think?: Churchland, Paul M., and Churchland, Patricia Smith. (1990). "Could a Machine Think?" Scientific American 262 (1).

OTC: Churchland, Paul M., and Churchland, Patricia Smith. (1998). On the Contrary: Critical Essays, 1987-1997. Cambridge, Mass.: The MIT Press.

M&C: Churchland, Paul M. (1988). Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind. Revised Edition. Cambridge, Mass.: The MIT Press.
