Human Visual System Neural Network
Proceedings of Student-Faculty Research Day, CSIS, Pace University, May 7th, 2010

Stanley Alphonso, Imran Afzal, Anand Phadake, Putta Reddy Shankar, and Charles Tappert
Pace University Seidenberg School of CSIS, White Plains, NY 10606, USA
{sa08400n, ia93345n, ap08653n, ps08615n, ctappert}@pace.edu

Abstract

The design of most Artificial Neural Networks for visual pattern recognition does not utilize our knowledge of the human visual system. Nobel Prize winners Hubel and Wiesel discovered line and edge detectors in the visual cortex of cats, and the experiments described in this study model such detectors to improve the accuracy and efficiency of neural networks for character recognition. The results obtained showed that the accuracy and efficiency of a neural network system using such detectors in the early layers are superior to those of one without such detectors that uses adjustable weights directly from the retina.

1. Introduction

This section briefly describes the visual system; the discovery by Nobel Laureates Hubel and Wiesel of line and edge detectors in the visual cortex; the field of computational neuroscience, which has developed biologically detailed models of the visual system; the area of machine learning that uses simplified neural network models, called Artificial Neural Networks (ANNs), in pattern recognition tasks; and a review of studies using ANNs that include line and/or edge detectors. The present study compares the use of ANNs with and without line detectors in the classification of the straight-line characters of the English alphabet.

The Visual System

The visual system of mammals is the part of the nervous system that allows organisms to see, interpreting the information from visible light to build a representation of the world. It consists of the eye, the optic nerve, the lateral geniculate nucleus, and the visual cortex.
Hubel and Wiesel won the 1981 Nobel Prize in Physiology or Medicine for their seminal work in the early 1960s [1, 2, 3]. They used microelectrodes to measure activity in individual cells in the visual cortex of cats. The cats were anesthetized and their eyes were open with the controlling muscles paralyzed so as to fix the gaze in a specific direction. They discovered that a given cell can be specifically sensitive to a line of light at a specific orientation moving in a certain direction across a specific region of the visual field. These observations are explained by assuming certain patterns of synaptic connections to cells in the lateral geniculate body (a relay station in the visual pathway). A key discovery by Hubel and Wiesel is the existence of line and edge detectors in the visual cortex.

Biological Simulations of the Visual System

The work of Hubel and Wiesel was instrumental in the creation of what is now called computational neuroscience, the study of brain function in terms of the information processing properties of the structures that make up the nervous system. Computational neuroscience is somewhat distinct from psychological connectionism and from theories of learning in disciplines such as machine learning, neural networks, and statistical learning theory in that it emphasizes descriptions of functional and biologically realistic neurons (and neural systems) and their physiology and dynamics. These models capture the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, protein and chemical coupling to network oscillations, columnar and topographic architecture, and learning and memory. These computational models are used to frame hypotheses that can be directly tested by current or future biological and/or psychological experiments. [4] Biologically detailed models of the brain can now be simulated due to increasingly powerful massively parallel supercomputers.
Using biophysical model neurons we have simulated a neuronal network model of layers II/III of the neocortex. These simulations, carried out on the Blue Gene/L supercomputer, comprise up to 22 million neurons and 11 billion synapses and are the largest simulations of this type ever performed. Such model sizes correspond to the cortex of a small mammal. The SPLIT library, used for these simulations, runs on several platforms, from single-processor to massively parallel machines. Performance measurements show good scaling
behavior on the Blue Gene/L supercomputer up to 8192 processors. Several key phenomena seen in the living brain appear as emergent features in the simulations. [5]

On 18 November 2009, scientists and engineers at IBM's Almaden Research Center, in San Jose, Calif., announced at the Supercomputing Conference (SC09) in Portland, Ore., that they have created the largest brain simulation to date on a supercomputer. The number of neurons and synapses in the simulation exceed those in a cat's brain; previous simulations have reached only the level of mouse and rat brains. Experts predict that the simulation will have profound effects in two arenas: It will lead to a better understanding of how the brain's architecture leads to cognition, and it should inspire the design of electronics that mimic the brain's as-yet-unmatched ability to do complex computation and learn using a small volume of hardware that consumes little power. [6]

Machine Learning and Neural Networks

Feed-forward Artificial Neural Networks (ANNs) allow signals to travel one way only, from input to output (Figure 1). There is no feedback (looping); i.e., the output of any layer does not affect the same or earlier layers. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs, and they are extensively used in pattern recognition [7, 8].

Figure 1. A simple feedforward network [8].

The most common type of artificial neural network consists of three groups, or layers, of units: a layer of input units is connected to a layer of hidden units, which is connected to a layer of output units, as shown in Figure 1. The activity of the input units represents the raw information that is fed into the network. The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.
The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units. In order to train a neural network to perform some task, we must adjust the weights of each unit in such a way that the error between the desired output and the actual output is reduced. This process requires that the neural network compute the error derivative of the weights; in other words, it must calculate how the error changes as each weight is increased or decreased slightly. The back-propagation algorithm is the most widely used method for determining the error derivative of the weights.

ANNs are machine learning programs based on neuron-like building blocks similar to the neurons in the human brain. Most of the research and applications of neural networks involve feed-forward networks trained by the back-propagation algorithm. These ANNs usually undergo a training phase in which the network is fed a set of inputs with known outcomes, and the known results are then back-propagated to adjust the weights among the neural elements. After many iterations of training, called epochs, the ANN is able to detect subtle patterns in large data sets and make predictions based on what it has learned through past observations.

ANNs Using Line and/or Edge Detectors

This section reviews pattern recognition systems that have employed the type of simple line and edge detectors used in this study. One study used line and edge detectors in four orientations (0, 45, 90, and 135 degrees) in a pattern recognition algorithm applied to Geographic Information System (GIS) images and maps [9]. These line detectors were developed in an earlier study [10]. A scheme for detecting edges and lines in polarimetric Synthetic Aperture Radar (SAR) images was proposed in [11]. The line detector was constructed from edge detectors. A traditional bright-line extraction process vectorized the raw results, and the scheme extracted dark linear structures in various full-polarimetric SAR images.
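The three-layer feed-forward network and back-propagation procedure described earlier can be sketched as follows. This is a minimal NumPy illustration with assumed layer sizes, learning rate, and sigmoid activations; it is not the exact configuration or code used in the experiments.

```python
import numpy as np

class TinyNet:
    """Minimal three-layer feed-forward network trained by backpropagation."""

    def __init__(self, n_in=400, n_hidden=50, n_out=6, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # input -> hidden
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))  # hidden -> output

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        h = self._sigmoid(x @ self.W1)   # hidden activations
        y = self._sigmoid(h @ self.W2)   # output activations
        return h, y

    def train_step(self, x, target, lr=0.5):
        """One backpropagation update for a single pattern; returns squared error."""
        h, y = self.forward(x)
        err = y - target
        # Error derivatives: how the error changes as each weight changes
        delta_out = err * y * (1.0 - y)
        delta_hid = (delta_out @ self.W2.T) * h * (1.0 - h)
        self.W2 -= lr * np.outer(h, delta_out)
        self.W1 -= lr * np.outer(x, delta_hid)
        return float(np.sum(err ** 2))
```

Repeating `train_step` over the full set of training patterns constitutes one epoch in the sense used above.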
Line detection can be done using edge techniques such as the Sobel, Prewitt, Laplacian of Gaussian, zero-crossing, and Canny edge detectors [12]. Edge detection using Sobel yields pixels lying only on edges, and these edges may be incomplete due to factors such as breaks, noise due to non-uniform illumination, and intensity discontinuity. Line detection using the Hough Transform uses edge detection followed by a linking algorithm that assembles pixels into meaningful edges by considering a point and all the lines that pass through it that satisfy the slope-intercept equation.

This Study

This study deals with line and edge detectors modeled on the human visual system. The design of most Artificial Neural Networks (ANNs) for visual pattern recognition, however, does not correspond to our knowledge of the human brain. The experiments below attempt to design neural networks that simulate line and edge detectors known to exist in the human visual cortex. The main objective of the experiments is to demonstrate good visual pattern recognition using pre-wired line and edge detectors similar to those of the human visual cortex. Specifically, we show that the accuracy and efficiency of
a neural network system using such detectors in the early layers is superior to one using adjustable weights directly from the retina.

2. Methodology

Two pattern classification experiments were performed on the straight-line characters of the uppercase English alphabet. The six alphabet character patterns were those consisting of horizontal and vertical line segments, that is, the uppercase letters E, F, H, I, L, and T. Each of these characters was represented by a 5x7 bit pattern as shown in Figure 2.

Figure 2. Alphabetic input patterns.

Experiment 1: ANN without line detectors

A simple three-layer feed-forward ANN (Figure 3) was trained to recognize the straight-line alphabetic characters. The input layer modeled a 20x20 retina of binary (0 or 1) units for a total of 400 units. The hidden layer consisted of 50 units, fully connected to the first layer for a total of 20,000 (400x50) variable weights. The output layer consisted of six units, one for each of the six straight-line alphabetic characters, and was fully connected to the hidden layer for a total of 300 (50x6) variable weights. The overall network contained a total of 20,300 variable weights that required training.

The 168 non-retinal-edge retinal images for "E" can be labeled E(i,j), where the vertical (y-axis) position i varies from 2 to 13 and the horizontal (x-axis) position j from 2 to 15. E(i,j) can be generated by moving the upper-left corner of the E bit pattern down i pixels and right j pixels. For example, the retinal images for E(2,2) and E(12,5) are shown in Figure 4.

Figure 4. Input E(2,2) and E(12,5) patterns on the retina.

For training input patterns, 40 random non-identical positions (randomly choosing i and j without replacement) were generated for each of the 6 characters, for a total of 240 (40*6) training input patterns. For testing, an additional 240 input patterns were similarly generated.
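The construction of the shifted retinal inputs can be sketched as follows. The "E" template below is an assumed 5x7 rendering (not necessarily the paper's exact Figure 2 pattern), and the helper names are illustrative.

```python
import numpy as np

# An assumed 5x7 bit pattern for "E": 7 rows x 5 columns.
E = np.array([[1, 1, 1, 1, 1],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 1, 1, 1, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 1, 1, 1, 1]])

def place_on_retina(pattern, i, j, size=20):
    """Return a size x size binary retina with `pattern` at row i, column j."""
    retina = np.zeros((size, size), dtype=int)
    h, w = pattern.shape
    retina[i:i + h, j:j + w] = pattern
    return retina

# The 168 non-retinal-edge placements: i in 2..13, j in 2..15 (12 x 14).
positions = [(i, j) for i in range(2, 14) for j in range(2, 16)]

# 40 random non-identical training positions per character (no replacement).
rng = np.random.default_rng(0)
train_positions = rng.choice(len(positions), size=40, replace=False)
```

E(2,2), for example, is `place_on_retina(E, 2, 2)`; repeating the sampling for each of the six characters yields the 240 training patterns.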
These patterns were pre-generated to facilitate training and testing. The sequence of training patterns cycled through the sequence E, F, H, I, L, T forty times for one pass (epoch) of training through the 240 patterns. The system was trained to produce a positive output-unit response for the correct character and negative output-unit responses for the incorrect characters.

Figure 3. Neural network for experiment 1.

The alphabet character patterns could be placed in any position inside the 20x20 retina not adjacent to an edge. Shifting the 5x7 bit pattern around the retina yields 168 (12*14) possible non-retinal-edge positions of an alphabet character pattern.

Experiment 2: ANN with line detectors

The ANN for this experiment incorporates horizontal and vertical line detectors into the model (Figure 5). It has five layers: input units, simple line detectors, complex line detectors, hidden units, and output units. The input layer and input character patterns are as in experiment 1, as are the hidden and output layers. The second layer consists of simple line detectors for detecting horizontal and vertical lines (Figure 6). Each of these simple line detectors had a threshold of 3. The input to a detector unit, when superimposed on an input pattern, is determined by adding the +'s and subtracting the -'s of
underlying active input-pattern units. The pluses indicate excitatory and the minuses inhibitory connections. Then, if the unit's input value is equal to or greater than the threshold, the detector is activated (set to 1); otherwise it is inactive (set to 0).

Figure 5. Neural network for experiment 2.

Figure 6. Simple vertical and horizontal line detectors.

The simple horizontal and vertical line detectors are overlapping and cover each of the possible retinal positions. There are a total of 288 (16*18) horizontal line detectors and 288 vertical line detectors, for a total of 576 simple line detectors.

The third layer consists of a relatively small number of complex line detectors, each taking inputs from a non-overlapping local subset of simple line detectors. Each complex line detector takes inputs from 12 simple line detectors in a 3x4 grid (3 horizontal and 4 vertical for a vertical line detector, and vice versa for a horizontal line detector). The centers of the 12 simple vertical line detectors that feed the various complex vertical line detectors are shown in Figure 7.

Figure 7. The layout of the 24 complex vertical line detectors and their corresponding 12 simple vertical line detector centers.

Each simple line detector feeds only one complex line detector. If any of the 12 simple line detectors feeding a complex line detector is activated, the output of the complex detector is set to 1; otherwise it is set to 0. The complex line detector grids are non-overlapping. The number of complex vertical and horizontal line detectors is 24 (288/12) each, for a total of 48 complex line detectors. We arranged the outputs of the complex vertical line detectors in a 4x6 grid (4 vertical positions and 6 horizontal, as shown in Figure 7) and the outputs of the complex horizontal line detectors in a 6x4 grid (6 vertical positions and 4 horizontal). The output of the complex detectors for E(1,1) is shown in Figure 8.

Figure 8.
Complex line detector outputs for E(1,1).

The two activated (output = 1) complex vertical line detectors respond to the upper and lower portions of the vertical bar of the E, and the three activated horizontal line detectors respond to the three horizontal bars of the E. We can rearrange these outputs into one vector of 48 detectors, first the vertical then the horizontal, and these 48-detector outputs can be pre-computed for each of the input letter patterns used for training and testing. Figure 9 shows the 48-detector outputs for E(1,1) plus five examples of the complex line detector outputs for each of the alphabet characters. The overall network consisted of 6,912 fixed weights and 2,700 (48x50 + 50x6) variable weights. The experiment 2 neural network was trained and tested in the same manner as the experiment 1 network.
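The two detector stages described above can be sketched as follows. The +/- template here is an assumed 3x3 horizontal-line template (the paper's actual templates are defined in Figure 6, and the resulting map sizes here differ from the paper's 16x18 layout), with OR-pooling for the complex detectors.

```python
import numpy as np

# Assumed horizontal-line template: excitatory (+1) center row,
# inhibitory (-1) rows above and below.
H_TEMPLATE = np.array([[-1, -1, -1],
                       [ 1,  1,  1],
                       [-1, -1, -1]])

def simple_detectors(retina, template, threshold=3):
    """Superimpose the template at every position; a detector is active (1)
    when the sum of +'s minus -'s over active inputs reaches the threshold."""
    th, tw = template.shape
    rh, rw = retina.shape
    out = np.zeros((rh - th + 1, rw - tw + 1), dtype=int)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            s = np.sum(template * retina[r:r + th, c:c + tw])
            out[r, c] = 1 if s >= threshold else 0
    return out

def complex_detectors(simple_map, block=(3, 4)):
    """Each complex detector ORs a non-overlapping block of simple detectors."""
    bh, bw = block
    h, w = simple_map.shape[0] // bh, simple_map.shape[1] // bw
    pooled = np.zeros((h, w), dtype=int)
    for r in range(h):
        for c in range(w):
            pooled[r, c] = simple_map[r * bh:(r + 1) * bh,
                                      c * bw:(c + 1) * bw].max()
    return pooled
```

The 48-element feature vector for a character image is then the concatenation of the pooled vertical and horizontal maps, which can be pre-computed once per training and test pattern.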
Figure 9. Complex detectors for example characters.

3. Experimental Results

Experiment 1: ANN without line detectors

The system was trained for a specified number of epochs using backpropagation to train the weights between the retina and hidden layer, and between the hidden layer and output layer. Table 1 shows the results with 10 hidden-layer units, and includes the number of training epochs, the training time, the percent accuracy on the 240 training patterns, and the percent accuracy on the 240 test patterns. In each case training ran for the full number of epochs. Due to the long training times it was not possible to obtain results with 50 hidden-layer units.

Epochs   Time      Training  Testing
50       ~2.5 hr   100%      26.7%
100      ~4 hr     100%      28.3%
200      ~8 hr     100%      28.8%
400      ~16 hr    100%      30.4%
800      ~30 hr    100%      28.3%
1600     ~2 days   100%      23.8%
Average            100%      27.7%

Table 1. Results with 10 hidden-layer units.

Experiment 2: ANN with line detectors

The complex line detectors were pre-computed for each input pattern. The system was trained for a specified number of epochs using backpropagation to train the weights between the layer of complex line detector units and the hidden layer, and between the hidden layer and output layer. Table 2 shows the results using 10 and Table 3 the results using 50 hidden-layer units.

Epochs   Time      Training  Testing
50       0:37 min  47.5%     37.5%
100      0:26 min  100.0%    63.3%
200      0:51 min  100.0%    68.8%
400      2:28 min  71.3%     50.8%
800      3:37 min  100.0%    67.9%
1600     :42 min   95.8%     56.7%
Average            85.8%     57.5%

Table 2. Results with 10 hidden-layer units.

Epochs Set/Attained  Time    Training  Testing
50/8                 41 sec  100%      70.0%
100/9                45 sec  100%      69.8%
200/10               48 sec  100%      71.9%
400/10               49 sec  100%      77.1%
800/8                41 sec  100%      72.5%
1600/9               45 sec  100%      71.3%
Average                      100%      72.1%

Table 3.
Results with 50 hidden-layer units.

Table 4 shows the confusion matrix for the 400-epoch run above. There were 40 different images per alphabet character. The greatest confusions were E versus F and F versus T (5 and 20%).

Table 4. Confusion matrix, overall accuracy of 77.1%.

4. Conclusions

The main conclusion is that the character recognition accuracy and efficiency of a neural network using Hubel-Wiesel-like line detectors in the early layers is superior to that of a network using adjustable weights directly from the retina. The recognition accuracies of the systems are shown in Figure 10.
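A confusion matrix like Table 4 can be tallied with a small helper: rows are the true (input) characters, columns the recognized (output) characters. This is an illustrative sketch, not code from the study.

```python
import numpy as np

CLASSES = ["E", "F", "H", "I", "L", "T"]

def confusion_matrix(true_labels, predicted_labels, classes=CLASSES):
    """Count of patterns with each (true, predicted) class pair."""
    idx = {c: k for k, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        m[idx[t], idx[p]] += 1   # row = true class, column = prediction
    return m

def overall_accuracy(m):
    """Fraction of patterns on the diagonal (correctly recognized)."""
    return np.trace(m) / m.sum()
```

With 40 test images per character, the row sums of the matrix are all 40 and the diagonal fraction gives the overall accuracy reported in Table 4.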
Figure 10. Comparison of recognition accuracies: no line detectors with 10 hidden units, line detectors with 10 hidden units, and line detectors with 50 hidden units.

The numbers of fixed and variable weights in the two experiments are shown in Table 5. The fewer variable weights that require training in the ANN with line detectors resulted in a more efficient network, decreasing training time by several orders of magnitude.

Experiment             Fixed Weights  Variable Weights  Total Weights
1: No line detectors   0              20,300            20,300
2: Line detectors      6,912          2,700             9,612

Table 5. Fixed and variable weights in experiments 1 and 2.

Future Work

The strength of the study was its simplicity; its weakness was also its simplicity, and the fact that the line detectors appear to be designed specifically for the patterns to be classified. The above experiments can be elaborated by increasing the alphabet to the full 26 uppercase letters, the first three of which are shown in Figure 11.

Figure 11. First three alphabet character patterns.

The second layer of simple line detectors can be doubled to provide detectors at four angles in increments of 45 degrees: 0, 45, 90, and 135 degrees. For detecting edges we can add simple edge detectors (potential horizontal ones are shown in Figure 12) and corresponding complex edge detectors. The output layer can be increased to 26 units, one for each of the characters to be recognized.

Figure 12. Edge detectors for horizontal edges.

Experiments could be conducted with both clean and noisy input. Random noise can be added to the input layer patterns: 2% (a random 8 of the 400 input units changed, either from 1 to 0 or from 0 to 1), 5%, 10%, 15%, and 20% noise.

References

[1] Hubel, D.H. and Wiesel, T.N. (1962). "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex." J. Physiol. (Lond.) 160.
[2] Wikipedia.
[3] Hubel, D.H. and Wiesel, T.N.,
Brain and Visual Perception: The Story of a 25-Year Collaboration, Oxford University Press, New York.
[4] Churchland, P.S., Koch, C., and Sejnowski, T.J., "What is computational neuroscience?" In Computational Neuroscience, E.L. Schwartz (Ed.), MIT Press.
[5] Djurfeldt, M. et al., "Massively parallel simulation of brain-scale neuronal network models," project report for Blue Gene Watson Consortium Days, 2006.
[6] Adee, S., "IBM Unveils a New Brain Simulator," IEEE Spectrum, November 2009.
[7] "Artificial Neural Networks," Artificial Intelligence Technologies tutorial.
[8] Stergiou, C. and Siganos, D., "Neural Networks."
[9] Funtanilla, L.A., "GIS Pattern Recognition and Rejection Analysis Using MATLAB," Proc. ESRI.
[10] Chan, T.S. and Yip, R.K.K., "Line Detection Algorithm," 13th Int. Conf. Pattern Recognition (ICPR'96), Vol. 2, 1996, p. 126.
[11] Borghys, D., Lacroix, V., and Perneel, C., "Edge and Line Detection in Polarimetric SAR Images," 16th Int. Conf. Pattern Recognition (ICPR'02), Vol. 2, 2002.
[12] Wikipedia.
More informationARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92
ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 BIOLOGICAL INSPIRATIONS Some numbers The human brain contains about 10 billion nerve cells (neurons) Each neuron is connected to the others through 10000
More informationIntroduction and Perceptron Learning
Artificial Neural Networks Introduction and Perceptron Learning CPSC 565 Winter 2003 Christian Jacob Department of Computer Science University of Calgary Canada CPSC 565 - Winter 2003 - Emergent Computing
More informationHow to do backpropagation in a brain
How to do backpropagation in a brain Geoffrey Hinton Canadian Institute for Advanced Research & University of Toronto & Google Inc. Prelude I will start with three slides explaining a popular type of deep
More informationMotion Perception 1. PSY305 Lecture 12 JV Stone
Motion Perception 1 PSY305 Lecture 12 JV Stone 1 Structure Human visual system as a band-pass filter. Neuronal motion detection, the Reichardt detector. The aperture problem. 2 The visual system is a temporal
More informationDual Nature Hidden Layers Neural Networks A Novel Paradigm of Neural Network Architecture
Dual Nature Hidden Layers Neural Networks A Novel Paradigm of Neural Network Architecture S.Karthikeyan 1, Ravi Prakash 2, B.B.Paul 3 1 Lecturer, Department of Computer Science, Faculty of Science, Banaras
More informationCSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning
CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS Learning Neural Networks Classifier Short Presentation INPUT: classification data, i.e. it contains an classification (class) attribute.
More informationViewpoint invariant face recognition using independent component analysis and attractor networks
Viewpoint invariant face recognition using independent component analysis and attractor networks Marian Stewart Bartlett University of California San Diego The Salk Institute La Jolla, CA 92037 marni@salk.edu
More informationArtificial Neural Networks. Edward Gatt
Artificial Neural Networks Edward Gatt What are Neural Networks? Models of the brain and nervous system Highly parallel Process information much more like the brain than a serial computer Learning Very
More informationSubcellular Localisation of Proteins in Living Cells Using a Genetic Algorithm and an Incremental Neural Network
Subcellular Localisation of Proteins in Living Cells Using a Genetic Algorithm and an Incremental Neural Network Marko Tscherepanow and Franz Kummert Applied Computer Science, Faculty of Technology, Bielefeld
More informationMemories Associated with Single Neurons and Proximity Matrices
Memories Associated with Single Neurons and Proximity Matrices Subhash Kak Oklahoma State University, Stillwater Abstract: This paper extends the treatment of single-neuron memories obtained by the use
More informationSPSS, University of Texas at Arlington. Topics in Machine Learning-EE 5359 Neural Networks
Topics in Machine Learning-EE 5359 Neural Networks 1 The Perceptron Output: A perceptron is a function that maps D-dimensional vectors to real numbers. For notational convenience, we add a zero-th dimension
More informationCS 4700: Foundations of Artificial Intelligence
CS 4700: Foundations of Artificial Intelligence Prof. Bart Selman selman@cs.cornell.edu Machine Learning: Neural Networks R&N 18.7 Intro & perceptron learning 1 2 Neuron: How the brain works # neurons
More informationArtificial Neural Networks
Artificial Neural Networks CPSC 533 Winter 2 Christian Jacob Neural Networks in the Context of AI Systems Neural Networks as Mediators between Symbolic AI and Statistical Methods 2 5.-NeuralNets-2.nb Neural
More informationUsing Variable Threshold to Increase Capacity in a Feedback Neural Network
Using Variable Threshold to Increase Capacity in a Feedback Neural Network Praveen Kuruvada Abstract: The article presents new results on the use of variable thresholds to increase the capacity of a feedback
More informationNeural Networks and the Back-propagation Algorithm
Neural Networks and the Back-propagation Algorithm Francisco S. Melo In these notes, we provide a brief overview of the main concepts concerning neural networks and the back-propagation algorithm. We closely
More informationOptimal In-Place Self-Organization for Cortical Development: Limited Cells, Sparse Coding and Cortical Topography
Optimal In-Place Self-Organization for Cortical Development: Limited Cells, Sparse Coding and Cortical Topography Juyang Weng and Matthew D. Luciw Department of Computer Science and Engineering Michigan
More informationProtein Structure Prediction Using Multiple Artificial Neural Network Classifier *
Protein Structure Prediction Using Multiple Artificial Neural Network Classifier * Hemashree Bordoloi and Kandarpa Kumar Sarma Abstract. Protein secondary structure prediction is the method of extracting
More informationPart 8: Neural Networks
METU Informatics Institute Min720 Pattern Classification ith Bio-Medical Applications Part 8: Neural Netors - INTRODUCTION: BIOLOGICAL VS. ARTIFICIAL Biological Neural Netors A Neuron: - A nerve cell as
More informationMultilayer Neural Networks. (sometimes called Multilayer Perceptrons or MLPs)
Multilayer Neural Networks (sometimes called Multilayer Perceptrons or MLPs) Linear separability Hyperplane In 2D: w x + w 2 x 2 + w 0 = 0 Feature x 2 = w w 2 x w 0 w 2 Feature 2 A perceptron can separate
More informationCan we synthesize learning? Sérgio Hortas Rodrigues IST, Aprendizagem Simbólica e Sub-Simbólica, Jun 2009
Hierarchical Neural Netw rks Can we synthesize learning? Sérgio Hortas Rodrigues IST, Aprendizagem Simbólica e Sub-Simbólica, Jun 2009 Topics Brain review Artificial Neurons Basic Neural Networks Back
More informationDesign Collocation Neural Network to Solve Singular Perturbed Problems with Initial Conditions
Article International Journal of Modern Engineering Sciences, 204, 3(): 29-38 International Journal of Modern Engineering Sciences Journal homepage:www.modernscientificpress.com/journals/ijmes.aspx ISSN:
More informationCOMP304 Introduction to Neural Networks based on slides by:
COMP34 Introduction to Neural Networks based on slides by: Christian Borgelt http://www.borgelt.net/ Christian Borgelt Introduction to Neural Networks Motivation: Why (Artificial) Neural Networks? (Neuro-)Biology
More informationSections 18.6 and 18.7 Artificial Neural Networks
Sections 18.6 and 18.7 Artificial Neural Networks CS4811 - Artificial Intelligence Nilufer Onder Department of Computer Science Michigan Technological University Outline The brain vs. artifical neural
More informationLast update: October 26, Neural networks. CMSC 421: Section Dana Nau
Last update: October 26, 207 Neural networks CMSC 42: Section 8.7 Dana Nau Outline Applications of neural networks Brains Neural network units Perceptrons Multilayer perceptrons 2 Example Applications
More information18.6 Regression and Classification with Linear Models
18.6 Regression and Classification with Linear Models 352 The hypothesis space of linear functions of continuous-valued inputs has been used for hundreds of years A univariate linear function (a straight
More informationINTRODUCTION TO ARTIFICIAL INTELLIGENCE
v=1 v= 1 v= 1 v= 1 v= 1 v=1 optima 2) 3) 5) 6) 7) 8) 9) 12) 11) 13) INTRDUCTIN T ARTIFICIAL INTELLIGENCE DATA15001 EPISDE 8: NEURAL NETWRKS TDAY S MENU 1. NEURAL CMPUTATIN 2. FEEDFRWARD NETWRKS (PERCEPTRN)
More informationDeep Feedforward Networks. Sargur N. Srihari
Deep Feedforward Networks Sargur N. srihari@cedar.buffalo.edu 1 Topics Overview 1. Example: Learning XOR 2. Gradient-Based Learning 3. Hidden Units 4. Architecture Design 5. Backpropagation and Other Differentiation
More information22c145-Fall 01: Neural Networks. Neural Networks. Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1
Neural Networks Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1 Brains as Computational Devices Brains advantages with respect to digital computers: Massively parallel Fault-tolerant Reliable
More informationArtificial Neural Networks The Introduction
Artificial Neural Networks The Introduction 01001110 01100101 01110101 01110010 01101111 01101110 01101111 01110110 01100001 00100000 01110011 01101011 01110101 01110000 01101001 01101110 01100001 00100000
More informationNeural Networks and Fuzzy Logic Rajendra Dept.of CSE ASCET
Unit-. Definition Neural network is a massively parallel distributed processing system, made of highly inter-connected neural computing elements that have the ability to learn and thereby acquire knowledge
More informationDeep Feedforward Networks
Deep Feedforward Networks Yongjin Park 1 Goal of Feedforward Networks Deep Feedforward Networks are also called as Feedforward neural networks or Multilayer Perceptrons Their Goal: approximate some function
More informationSGD and Deep Learning
SGD and Deep Learning Subgradients Lets make the gradient cheating more formal. Recall that the gradient is the slope of the tangent. f(w 1 )+rf(w 1 ) (w w 1 ) Non differentiable case? w 1 Subgradients
More informationNeural Networks. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington
Neural Networks CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 Perceptrons x 0 = 1 x 1 x 2 z = h w T x Output: z x D A perceptron
More informationMid Year Project Report: Statistical models of visual neurons
Mid Year Project Report: Statistical models of visual neurons Anna Sotnikova asotniko@math.umd.edu Project Advisor: Prof. Daniel A. Butts dab@umd.edu Department of Biology Abstract Studying visual neurons
More informationIntroduction to Deep Learning
Introduction to Deep Learning Some slides and images are taken from: David Wolfe Corne Wikipedia Geoffrey A. Hinton https://www.macs.hw.ac.uk/~dwcorne/teaching/introdl.ppt Feedforward networks for function
More informationMultilayer Neural Networks. (sometimes called Multilayer Perceptrons or MLPs)
Multilayer Neural Networks (sometimes called Multilayer Perceptrons or MLPs) Linear separability Hyperplane In 2D: w 1 x 1 + w 2 x 2 + w 0 = 0 Feature 1 x 2 = w 1 w 2 x 1 w 0 w 2 Feature 2 A perceptron
More informationThunderstorm Forecasting by using Artificial Neural Network
Thunderstorm Forecasting by using Artificial Neural Network N.F Nik Ismail, D. Johari, A.F Ali, Faculty of Electrical Engineering Universiti Teknologi MARA 40450 Shah Alam Malaysia nikfasdi@yahoo.com.my
More informationCISC 3250 Systems Neuroscience
CISC 3250 Systems Neuroscience Systems Neuroscience How the nervous system performs computations How groups of neurons work together to achieve intelligence Professor Daniel Leeds dleeds@fordham.edu JMH
More informationNeural Networks and Ensemble Methods for Classification
Neural Networks and Ensemble Methods for Classification NEURAL NETWORKS 2 Neural Networks A neural network is a set of connected input/output units (neurons) where each connection has a weight associated
More informationAn Introductory Course in Computational Neuroscience
An Introductory Course in Computational Neuroscience Contents Series Foreword Acknowledgments Preface 1 Preliminary Material 1.1. Introduction 1.1.1 The Cell, the Circuit, and the Brain 1.1.2 Physics of
More informationSimple Neural Nets For Pattern Classification
CHAPTER 2 Simple Neural Nets For Pattern Classification Neural Networks General Discussion One of the simplest tasks that neural nets can be trained to perform is pattern classification. In pattern classification
More informationSystem Identification for the Hodgkin-Huxley Model using Artificial Neural Networks
Proceedings of International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007 System Identification for the Hodgkin-Huxley Model using Artificial Neural Networks Manish Saggar,
More informationHow to read a burst duration code
Neurocomputing 58 60 (2004) 1 6 www.elsevier.com/locate/neucom How to read a burst duration code Adam Kepecs a;, John Lisman b a Cold Spring Harbor Laboratory, Marks Building, 1 Bungtown Road, Cold Spring
More informationIn biological terms, memory refers to the ability of neural systems to store activity patterns and later recall them when required.
In biological terms, memory refers to the ability of neural systems to store activity patterns and later recall them when required. In humans, association is known to be a prominent feature of memory.
More informationUnit III. A Survey of Neural Network Model
Unit III A Survey of Neural Network Model 1 Single Layer Perceptron Perceptron the first adaptive network architecture was invented by Frank Rosenblatt in 1957. It can be used for the classification of
More informationNeural Networks 1 Synchronization in Spiking Neural Networks
CS 790R Seminar Modeling & Simulation Neural Networks 1 Synchronization in Spiking Neural Networks René Doursat Department of Computer Science & Engineering University of Nevada, Reno Spring 2006 Synchronization
More informationSections 18.6 and 18.7 Artificial Neural Networks
Sections 18.6 and 18.7 Artificial Neural Networks CS4811 - Artificial Intelligence Nilufer Onder Department of Computer Science Michigan Technological University Outline The brain vs artifical neural networks
More informationLearning and Memory in Neural Networks
Learning and Memory in Neural Networks Guy Billings, Neuroinformatics Doctoral Training Centre, The School of Informatics, The University of Edinburgh, UK. Neural networks consist of computational units
More informationCS:4420 Artificial Intelligence
CS:4420 Artificial Intelligence Spring 2018 Neural Networks Cesare Tinelli The University of Iowa Copyright 2004 18, Cesare Tinelli and Stuart Russell a a These notes were originally developed by Stuart
More informationGrundlagen der Künstlichen Intelligenz
Grundlagen der Künstlichen Intelligenz Neural networks Daniel Hennes 21.01.2018 (WS 2017/18) University Stuttgart - IPVS - Machine Learning & Robotics 1 Today Logistic regression Neural networks Perceptron
More informationBASIC VISUAL SCIENCE CORE
BASIC VISUAL SCIENCE CORE Absolute and Increment Thresholds Ronald S. Harwerth Fall, 2016 1. Psychophysics of Vision 2. Light and Dark Adaptation Michael Kalloniatis and Charles Luu 1 The Neuron Doctrine
More informationKeywords- Source coding, Huffman encoding, Artificial neural network, Multilayer perceptron, Backpropagation algorithm
Volume 4, Issue 5, May 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Huffman Encoding
More informationMemory Capacity of Linear vs. Nonlinear Models of Dendritic Integration
Memory Capacity of Linear vs. Nonlinear Models of Dendritic Integration Panayiota Poirazi* Biomedical Engineering Department University of Southern California Los Angeles, CA 90089 poirazi@sc/. usc. edu
More informationLecture 5: Logistic Regression. Neural Networks
Lecture 5: Logistic Regression. Neural Networks Logistic regression Comparison with generative models Feed-forward neural networks Backpropagation Tricks for training neural networks COMP-652, Lecture
More information