Short Communication

EVOLUTIONARY NEURAL GAS: A SCALE-FREE SELF-ORGANIZING NEURAL NET WITHOUT TOPOLOGICAL CONSTRAINT
Luigi Lella 1), Ignazio Licata 2)

1) Università Politecnica delle Marche, D.E.I.T., via Brecce Bianche, 6131 Ancona, Italy. l.lella@inform.unian.it
2) Istituto di Cibernetica Non-Lineare per lo Studio dei Sistemi Complessi, via Favorita 9, 915 Marsala (TP), Italy. licata@programmazione.it

Introduction

Self-organizing nets are particularly well suited to non-hierarchical cluster analysis by means of competitive learning strategies. This makes them an ideal model for categorization processes in natural and artificial intelligent systems. The most evident limits of such models lie in their use of deterministic rules and dimensional constraints (1). Observing evolutionary processes, we see that the system-environment interaction does not let us foresee in advance what structural organization the net will take; on the contrary, the organization emerges during the process and often varies over time. These requirements have stimulated alternative approaches, such as the TRN (2) or the GNG (3), but they are all based on deterministic rules, which leads to the emergence of topologies whose final structure is a Delaunay triangulation (4). From a biological viewpoint this is scarcely plausible, because it steers input categorization towards simple schemes, whereas natural inputs form an ill-defined, noisy and highly aleatory set; the emerging structures are generally less rigid, endowed with wide local connectivity yet quite extended with respect to the clustering nodes. We outline here a model without topological constraints, driven by simple probabilistic rules. The net is treated as a population of nodes in which the main events conditioning its evolution, such as the creation or elimination of links and units, depend on the amount of locally and globally available resources.
During the training phase, i.e. the assimilation of the inputs, the model adopts a winner-takes-all strategy (5). The links of the stronger units are not directly reinforced; when resources are low, the survival probability of the weak nodes simply decays. In this way a scale-free graph emerges, a mark, strongly significant from the viewpoint of physics, of self-organizing processes and information amplification. Graphs of this kind are found in many natural and artificial systems endowed with logical and thermodynamic openness (6, 7). We will then see that this structure physically justifies the use of the term "gas".
Evolutionary Algorithm

The net nodes can be considered as individuals of a population living in an ecosystem. The survival of the population is granted by two kinds of resources: one global, given by the available space the net can occupy (at most M_P units), and local ones, given by the distortion errors D, equal to the distances between the input vectors and the vectors (centres, or weight vectors) associated to the net units closest to them (the winner units). The individuals are divided into groups developing around the winners, but they can also establish links with other groups. The interaction modalities among individuals depend on the amount of available resources: when resources are scarce the individuals tend to compete with each other; when resources are abundant they tend to reproduce. Evolution stops when the expected quantization error D, the average of the distances between the centres of the winners and the corresponding input vectors [1], is minimized, which represents a high level of input modelling and, consequently, a good adaptive outcome.

D = (1/K) Σ_{i=1}^{K} ||x_i − w_j(i)||   [1]

where w_j(i) is the centre of the winner for input x_i. Each training step of the net, corresponding to the presentation of the sequence of input signals (an epoch), can be divided into three phases.

Winner unit selection (a). The units whose centres are closest to the presented inputs are selected (WTA strategy). The winners are thus the units closest to the local resources and, consequently, the strongest units of the population, tending to stay in their own zone with no need to establish links with other groups.

Centre updating (b). In this phase both the centres of the winners and those of the units connected to them are updated. The centres of the winners shift towards the corresponding inputs; the shift is a fraction of the distance (difference vector) separating the winners' centres from the corresponding inputs.
The centres of the units linked to the winners are modified too, but by a smaller fraction of the distance which separates them from the inputs. Let x be the input corresponding to the winner with centre w, and w_i the centres of the units linked to the winner:

w(t+1) = w(t) + α (x − w(t))
w_i(t+1) = w_i(t) + β (x − w_i(t))   [2]

Each unit is characterized not only by a centre, but also by a variable d representing the quadratic distance from the closest local resource. This value mirrors the individual's weakness: the smaller it is, the greater the individual's chances of survival. At each evolutionary step the variable is reset to its maximum. After each update of the centre w of a unit, which happens when a given input x is presented, the quadratic distance between the two vectors (x − w) is
calculated; if it is less than the node's current weakness, it becomes the new value.

Population evolutionary phase (c). The population of net nodes evolves by producing new descendants, establishing new connections and eliminating the weaker units. As shown in Figure 1, all these events occur with a probability depending on the availability of the system resources. Each unit i, i = 1 … N(t), where N(t) is the current net dimension (total number of units), can meet the closest winner j with probability P_m. If the meeting takes place, the two units establish a link and can interact by reproducing with probability P_r. In this case two new units are created, both with centre at the midpoint of the parents' centres:

w_1 = w_2 = (w_i + w_j)/2   [3]

Fig. 1 - Algorithm Evolutionary Phase
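Phases (a) and (b) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: unit centres are plain lists, `links` maps a unit to its linked neighbours, and the parameter values are assumptions for the example.

```python
import math

def closest_unit(units, x):
    """Phase (a), winner-takes-all: index of the unit whose centre
    lies nearest to the input vector x (Euclidean distance)."""
    return min(range(len(units)), key=lambda j: math.dist(units[j], x))

def update_centres(units, links, x, alpha=0.5, beta=0.06):
    """Phase (b), rule [2]: the winner's centre moves a fraction alpha
    towards x; centres linked to the winner move a smaller fraction beta."""
    j = closest_unit(units, x)
    units[j] = [w + alpha * (xi - w) for w, xi in zip(units[j], x)]
    for i in links.get(j, ()):
        units[i] = [w + beta * (xi - w) for w, xi in zip(units[i], x)]
    return j
```

Here beta < alpha reflects the rule that linked units move by a smaller fraction than the winner.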
If, due to the lack of resources, reproduction does not take place, the weakest unit of the population, i.e. the one with the highest weakness, is removed. If unit i does not meet any winner, it can interact with its closest unit k with probability P_r, producing a new unit whose centre is:

w = (w_i + w_k)/2   [4]

If we fix a maximum population size N_max, the ratio N(t)/N_max can be seen as a measure of the ecosystem's global resources. For example, if the population size is low, the reproduction rate will be high, so we can reasonably set P_r = 1 − N(t)/N_max. Conversely, the higher the population size, the higher the possibility of connection between two individuals, so we can set P_m = N(t)/N_max. We can also take into consideration a local resource linked to the expected quantization error: each unit i could meet a winner with probability P_m = (N(t)/N_max)(1 − D_min/d_i), and P_r = 1 − P_m, where D_min is the average error we aim to reach. Obviously, the weakness d_i of the node under consideration may be smaller than D_min, so the ratio must be bounded: D_min/d_i < 1.

The dynamic course of the population size can be studied through two models: the first considers only the global resources, the second also the local ones, as defined above. Summing the expected contributions of the events of phase (c) over the population, the balance reduces to

N(t+1) = N(t) + P_m P_r N(t) − P_m (1 − P_r) N(t) + (1 − P_m) P_r N(t) − (1 − P_m)(1 − P_r) N(t) = 2 P_r N(t)

so that, writing X(t) for the normalized ratio N(t)/N_max,

X(t+1) = 2 X(t) (1 − X(t))                       (first model)
X(t+1) = 2 X(t) (1 − X(t)(1 − D_min/D))          (second model)   [5]

Except for the factor (1 − D_min/D), the formula recalls the quadratic logistic map of Annunziato and Pizzuti (8):

X(t+1) = a X(t) (1 − X(t))   [6]

The outcome is in accordance with the premises.
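The probabilistic events of phase (c) can be sketched as follows for the first ENG model. This is a hedged illustration under our reading of rules [3] and [4] (two offspring at the parents' midpoint in the winner case, one otherwise); link bookkeeping is omitted and all names are ours, not the authors'.

```python
import random

def midpoint(wa, wb):
    """Offspring centre: the average of the two parent centres (rules [3] and [4])."""
    return [(a + b) / 2 for a, b in zip(wa, wb)]

def step_unit(units, weakness, i, winner, neighbour, n_max, rng=random):
    """Fate of unit i in phase (c), first ENG model.
    P_m = N/N_max: probability of meeting the closest winner;
    P_r = 1 - N/N_max: probability of reproducing.
    New units start with weakness at its maximum."""
    p_m = len(units) / n_max
    p_r = 1.0 - p_m
    if rng.random() < p_m:              # unit i meets the closest winner
        if rng.random() < p_r:          # abundance: two offspring at the midpoint
            child = midpoint(units[i], units[winner])
            units.extend([child, list(child)])
            weakness.extend([float('inf'), float('inf')])
        else:                           # scarcity: the weakest unit is removed
            worst = max(range(len(units)), key=weakness.__getitem__)
            del units[worst]
            del weakness[worst]
    elif rng.random() < p_r:            # i interacts with its closest unit instead
        units.append(midpoint(units[i], units[neighbour]))
        weakness.append(float('inf'))
```

Passing `rng` explicitly keeps the stochastic events reproducible in experiments.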
In fact, [5] describes an evolutionary history in which the self-organization of the initial growth process takes place and is consequently followed by saturation, linked not only to the global resources but also to the particular distribution of the winners within the net and to the configuration of units of various strength in their neighbourhood. Annunziato and Pizzuti proved that different regimes arise as the parameter varies: for a < 1.7 there is no chaotic behaviour and we have a simple attractor; for 1.7 < a < 2.1 chaotic regimes arise, with a sequence of attractors localized in different zones of the phase space.
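Map [6] can be iterated directly. The sketch below considers only the idealized deterministic map in isolation, without the stochastic evolutionary events of the full algorithm; for instance, at a = 2 the bare map settles on the fixed point X* = 1 − 1/a.

```python
def iterate_map(a, x0, steps):
    """Iterate the quadratic map X(t+1) = a * X(t) * (1 - X(t)) of [6]."""
    traj = [x0]
    for _ in range(steps):
        traj.append(a * traj[-1] * (1 - traj[-1]))
    return traj

# Idealized map at a = 2, started from X(0) = 0.1:
traj = iterate_map(2.0, 0.1, 50)
```

Plotting consecutive pairs (traj[t], traj[t+1]) reproduces the phase-plane view used later in Figure 7.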
Simulations

We have compared the performance of the ENG with that of the GNG in categorizing two-dimensional inputs uniformly distributed over two different regions: in the first case the inputs are localized within four square regions; in the second, within a ring region. As stopping criterion we chose the minimization of D below the chosen D_min threshold. For the GNG, the parameters of the centre-update formulae are α = .5, β = .5; every λ = 3 steps a new unit is introduced, and the maximum age of links is 88. For the two ENG models the centre-update parameters are α = .5, β = .6, and a maximum size N_max was fixed. As shown in Figure 1, after training the GNG vectors are all placed within the input domain, i.e. the net tends to follow the exact topology of the input signals. In the ENG, on the contrary, some units fall outside the input domain, but the net remains connected by a few hubs which give it a scale-free graph structure (Fig. 2). The net's structural parameters indeed appear to be the typical ones of a scale-free graph: the low average path length characterizes the net's small-world structure, the high clustering coefficient shows the presence of considerable aggregations of net units, and the degree distribution (number of links per node) displays a slowly decaying tail, i.e. there exists a restricted number of nodes establishing many more links than the average.

Fig. 1 - Growing Neural Gas simulations
Fig. 2a - ENG simulations (first model)

Fig. 2b - ENG simulations (second model)

As shown in Figures 3 and 4, in the GNG the maximum number of links a node can establish is about 5, whereas in the ENG we can find nodes establishing more than 8 links.
Fig. 3 - Average degree distribution in GNG (two different input manifolds, with fitted power laws)

Fig. 4a - Average degree distribution in ENG (first model, two different input manifolds, with fitted power laws)

Fig. 4b - Average degree distribution in ENG (second model, two different input manifolds, with fitted power laws)

Tables 1 and 2 report the average values of the structural parameters of the two nets. Each value was obtained by averaging the parameters of 3 different nets of the same type and dimension, trained on the same inputs. The GNG shows a high average path length and a low clustering coefficient; the ENG shows a short average path length and a high clustering coefficient, together with the power law ruling the degree distribution, which confirms the scale-free features.
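The first two structural parameters of the tables can be estimated from an adjacency map of the trained net. A self-contained sketch in pure Python (dict-of-sets adjacency; illustrative, not the authors' measurement code):

```python
from collections import deque
from itertools import combinations

def clustering_coefficient(adj):
    """Average local clustering: the fraction of a node's neighbour pairs
    that are themselves linked, averaged over nodes of degree >= 2."""
    vals = []
    for v, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        linked = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        pairs = len(nbrs) * (len(nbrs) - 1) / 2
        vals.append(linked / pairs)
    return sum(vals) / len(vals) if vals else 0.0

def average_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS per node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs
```

On a hub-dominated graph such as the ENG's, this yields a short average path length and a high clustering coefficient, the signature discussed above.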
Tab. 1 - Comparison of structural parameters (average values, first input manifold): average path length, clustering coefficient and power-law exponent for GNG, ENG (1st model) and ENG (2nd model)

Tab. 2 - Comparison of structural parameters (average values, second input manifold): average path length, clustering coefficient and power-law exponent for GNG, ENG (1st model) and ENG (2nd model)

The two ENG models share the same structure: in both it is the winners that create most of the links, as they are the privileged units with which each node tries to create a link. By making the probabilities depend also on the local error D (second ENG model), we obtain a structure more GNG-like, i.e. more gas-like. In fact the conditions for creating a new link become much more restrictive, which consequently reduces the interaction between any subset of units and the rest of the net. The structure of links thus seems to extend more uniformly over the region where the inputs are defined, as shown in Figure 2b (more evident for the ring distribution of inputs). Figure 5 represents the population dynamics of the two ENG models. In the first model the population size seems to converge to a final value of about 0.7 N_max, confirming the experimental outcomes of Annunziato and Pizzuti.

Fig. 5 - Network size evolution of the two ENG models (first input manifold)

Considering that d tends to decrease gradually during training, as Figure 6 points out, the influence of the (1 − D_min/d) factor instead tends to increase, reducing the saturating effects of formulae [5] and [6]. This explains the sudden population increase at the final steps of training in the second model.
Fig. 6 - Average error D of the two ENG models (first input manifold)

The sudden increase of units is recorded mainly around the winners (which have a low d). This means that at the final steps of training new units and links keep growing around the winners, but not among the subgroups of units, which become more and more isolated. If we visualize the phase plane (X(t), X(t+1)), we immediately notice that the point-like character of the attractor becomes more marked in the second model: the system tends mostly to converge to a fixed final state. Such behaviour is due precisely to the scarce interaction among the groups of net units at the final stages of training.

Fig. 7 - Population dynamics (X(t), X(t+1)) of the two ENG models (first input manifold)
Conclusions: A Physical Picture and Future Developments

The algorithm developed here essentially represents a process of experiential selection analogous to the selection phase in Edelman's model (9, 10). The population evolution of nodes is led by an evolutionary strategy which does not use classical learning rules such as the Hebb law (11). The adoption of a discrete probabilistic model is motivated by recent experimental studies showing how groups of neuronal units exhibit wide functional versatility in responding to different stimuli and, vice versa, how the same stimuli give rise to extremely diversified responses. This points out that a certain random component of cerebral activity lies at a much more radical level than noise, and requires a class of models different from the traditional deterministic ones, linear and nonlinear alike (12). The ENG algorithm lets groups of units form, but does not integrate and synchronize their activity; the system thus behaves like a gas of weakly interacting neuronal molecules. The problem with adopting a winner-takes-all strategy is that it favours a process of information localization, which is quite different from Edelman's dynamic-core hypothesis: a dynamic core is a process, not a specified net element, and it is defined by neural interactions rather than by a specific localization. In the ENG model it is quite easy to obtain the formation of dynamic cores by connecting selected winners through the diffusion of the activation signal. Such a solution favours the synchronization of the activities of the groups of net units. In consonance with the gas metaphor, we can say that the activation signal crystallizes the system around the original scale-free structure, increasing the cooperative integration and the neural complexity essential to the autopoietic large-scale processes of neural activity.

The authors thank their old pal mousy for the linguistic revision.

References

1. Kohonen T.
Self-organized Formation of Topologically Correct Feature Maps, Biological Cybernetics, 1982;43:59-69.
2. Martinetz TM, Schulten KJ. Topology Representing Networks, Neural Networks, 1994;7(3).
3. Fritzke B. Growing Cell Structures - A Self-Organizing Network for Unsupervised and Supervised Learning, Neural Networks, 1994;7(9).
4. Delaunay B. Bulletin of the Academy of Sciences USSR, 1934;VII.
5. Chialvo DR, Bak P. Learning From Mistakes, Neuroscience, 1999;90(4).
6. Steyvers M, Tenenbaum J. The Large-Scale Structure of Semantic Networks. Working draft submitted to Cognitive Science, 2001.
7. Albert R, Barabási A. Topology of Evolving Networks: Local Events and Universality, Physical Review Letters, 2000;85.
8. Annunziato M, Pizzuti S. Adaptive Parametrization of Evolutionary Algorithms Driven by Reproduction and Competition, Proceedings of ESIT 2000, Aachen, Germany.
9. Edelman GM. Group Selection and Phasic Reentrant Signaling: A Theory of Higher Brain Function, in The Mindful Brain (eds Edelman GM and Mountcastle VB), Cambridge: MIT Press; 1978.
10. Edelman GM. Neural Darwinism: The Theory of Neuronal Group Selection, New York: Basic Books; 1987.
11. Martinetz TM. Competitive Hebbian Learning Rule Forms Perfectly Topology Preserving Maps, ICANN'93: International Conference on Artificial Neural Networks, Amsterdam: Springer; 1993.
12. Zbilut JP. Unstable Singularities and Randomness, Elsevier; 2004.
Neural Networks Nethra Sambamoorthi, Ph.D Jan 2003 CRMportals Inc., Nethra Sambamoorthi, Ph.D Phone: 732-972-8969 Nethra@crmportals.com What? Saying it Again in Different ways Artificial neural network
More informationNeural Nets and Symbolic Reasoning Hopfield Networks
Neural Nets and Symbolic Reasoning Hopfield Networks Outline The idea of pattern completion The fast dynamics of Hopfield networks Learning with Hopfield networks Emerging properties of Hopfield networks
More informationAn artificial neural networks (ANNs) model is a functional abstraction of the
CHAPER 3 3. Introduction An artificial neural networs (ANNs) model is a functional abstraction of the biological neural structures of the central nervous system. hey are composed of many simple and highly
More informationLecture 7 Artificial neural networks: Supervised learning
Lecture 7 Artificial neural networks: Supervised learning Introduction, or how the brain works The neuron as a simple computing element The perceptron Multilayer neural networks Accelerated learning in
More informationInfluence of Criticality on 1/f α Spectral Characteristics of Cortical Neuron Populations
Influence of Criticality on 1/f α Spectral Characteristics of Cortical Neuron Populations Robert Kozma rkozma@memphis.edu Computational Neurodynamics Laboratory, Department of Computer Science 373 Dunn
More informationChapter 9: The Perceptron
Chapter 9: The Perceptron 9.1 INTRODUCTION At this point in the book, we have completed all of the exercises that we are going to do with the James program. These exercises have shown that distributed
More informationSupervisor: Prof. Stefano Spaccapietra Dr. Fabio Porto Student: Yuanjian Wang Zufferey. EPFL - Computer Science - LBD 1
Supervisor: Prof. Stefano Spaccapietra Dr. Fabio Porto Student: Yuanjian Wang Zufferey EPFL - Computer Science - LBD 1 Introduction Related Work Proposed Solution Implementation Important Results Conclusion
More informationCSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning
CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS Learning Neural Networks Classifier Short Presentation INPUT: classification data, i.e. it contains an classification (class) attribute.
More informationARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD
ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD WHAT IS A NEURAL NETWORK? The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided
More informationOriented majority-vote model in social dynamics
Author: Facultat de Física, Universitat de Barcelona, Diagonal 645, 08028 Barcelona, Spain. Advisor: M. Ángeles Serrano Mass events ruled by collective behaviour are present in our society every day. Some
More information1. Synchronization Phenomena
1. Synchronization Phenomena In nature synchronization happens all the time. In mechanical systems, in biological systems, in epidemiology, basically everywhere. When we talk about synchronization we usually
More informationData Mining Part 5. Prediction
Data Mining Part 5. Prediction 5.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline How the Brain Works Artificial Neural Networks Simple Computing Elements Feed-Forward Networks Perceptrons (Single-layer,
More informationBIOLOGY 111. CHAPTER 1: An Introduction to the Science of Life
BIOLOGY 111 CHAPTER 1: An Introduction to the Science of Life An Introduction to the Science of Life: Chapter Learning Outcomes 1.1) Describe the properties of life common to all living things. (Module
More informationBig Idea 1: The process of evolution drives the diversity and unity of life.
Big Idea 1: The process of evolution drives the diversity and unity of life. understanding 1.A: Change in the genetic makeup of a population over time is evolution. 1.A.1: Natural selection is a major
More informationInstability in Spatial Evolutionary Games
Instability in Spatial Evolutionary Games Carlos Grilo 1,2 and Luís Correia 2 1 Dep. Eng. Informática, Escola Superior de Tecnologia e Gestão, Instituto Politécnico de Leiria Portugal 2 LabMag, Dep. Informática,
More informationReduction of complex models using data-mining and nonlinear projection techniques
Reduction of complex models using data-mining and nonlinear projection techniques Bernhardt, K. a, Wirtz, K.W. a Institute for Chemistry and Biology of the Marine Environment (ICBM) Carl-von-Ossietzky
More informationMachine Learning. Neural Networks. (slides from Domingos, Pardo, others)
Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward
More informationStorage Capacity of Letter Recognition in Hopfield Networks
Storage Capacity of Letter Recognition in Hopfield Networks Gang Wei (gwei@cs.dal.ca) Zheyuan Yu (zyu@cs.dal.ca) Faculty of Computer Science, Dalhousie University, Halifax, N.S., Canada B3H 1W5 Abstract:
More informationClassic K -means clustering. Classic K -means example (K = 2) Finding the optimal w k. Finding the optimal s n J =
Review of classic (GOF K -means clustering x 2 Fall 2015 x 1 Lecture 8, February 24, 2015 K-means is traditionally a clustering algorithm. Learning: Fit K prototypes w k (the rows of some matrix, W to
More informationPublic Key Exchange by Neural Networks
Public Key Exchange by Neural Networks Zahir Tezcan Computer Engineering, Bilkent University, 06532 Ankara zahir@cs.bilkent.edu.tr Abstract. This work is a survey on the concept of neural cryptography,
More informationChaos and Liapunov exponents
PHYS347 INTRODUCTION TO NONLINEAR PHYSICS - 2/22 Chaos and Liapunov exponents Definition of chaos In the lectures we followed Strogatz and defined chaos as aperiodic long-term behaviour in a deterministic
More informationNeural Networks Lecture 4: Radial Bases Function Networks
Neural Networks Lecture 4: Radial Bases Function Networks H.A Talebi Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Winter 2011. A. Talebi, Farzaneh Abdollahi
More informationMemories Associated with Single Neurons and Proximity Matrices
Memories Associated with Single Neurons and Proximity Matrices Subhash Kak Oklahoma State University, Stillwater Abstract: This paper extends the treatment of single-neuron memories obtained by the use
More informationFeedforward Neural Nets and Backpropagation
Feedforward Neural Nets and Backpropagation Julie Nutini University of British Columbia MLRG September 28 th, 2016 1 / 23 Supervised Learning Roadmap Supervised Learning: Assume that we are given the features
More informationNeural Networks: Introduction
Neural Networks: Introduction Machine Learning Fall 2017 Based on slides and material from Geoffrey Hinton, Richard Socher, Dan Roth, Yoav Goldberg, Shai Shalev-Shwartz and Shai Ben-David, and others 1
More information8. Lecture Neural Networks
Soft Control (AT 3, RMA) 8. Lecture Neural Networks Learning Process Contents of the 8 th lecture 1. Introduction of Soft Control: Definition and Limitations, Basics of Intelligent" Systems 2. Knowledge
More informationA FUZZY NEURAL NETWORK MODEL FOR FORECASTING STOCK PRICE
A FUZZY NEURAL NETWORK MODEL FOR FORECASTING STOCK PRICE Li Sheng Institute of intelligent information engineering Zheiang University Hangzhou, 3007, P. R. China ABSTRACT In this paper, a neural network-driven
More informationBasic Principles of Unsupervised and Unsupervised
Basic Principles of Unsupervised and Unsupervised Learning Toward Deep Learning Shun ichi Amari (RIKEN Brain Science Institute) collaborators: R. Karakida, M. Okada (U. Tokyo) Deep Learning Self Organization
More informationUsing Variable Threshold to Increase Capacity in a Feedback Neural Network
Using Variable Threshold to Increase Capacity in a Feedback Neural Network Praveen Kuruvada Abstract: The article presents new results on the use of variable thresholds to increase the capacity of a feedback
More informationEffects of Interactive Function Forms and Refractoryperiod in a Self-Organized Critical Model Based on Neural Networks
Commun. Theor. Phys. (Beijing, China) 42 (2004) pp. 121 125 c International Academic Publishers Vol. 42, No. 1, July 15, 2004 Effects of Interactive Function Forms and Refractoryperiod in a Self-Organized
More informationLecture 4: Feed Forward Neural Networks
Lecture 4: Feed Forward Neural Networks Dr. Roman V Belavkin Middlesex University BIS4435 Biological neurons and the brain A Model of A Single Neuron Neurons as data-driven models Neural Networks Training
More informationConvergence of Hybrid Algorithm with Adaptive Learning Parameter for Multilayer Neural Network
Convergence of Hybrid Algorithm with Adaptive Learning Parameter for Multilayer Neural Network Fadwa DAMAK, Mounir BEN NASR, Mohamed CHTOUROU Department of Electrical Engineering ENIS Sfax, Tunisia {fadwa_damak,
More informationPlan. Perceptron Linear discriminant. Associative memories Hopfield networks Chaotic networks. Multilayer perceptron Backpropagation
Neural Networks Plan Perceptron Linear discriminant Associative memories Hopfield networks Chaotic networks Multilayer perceptron Backpropagation Perceptron Historically, the first neural net Inspired
More informationAlgorithms for Learning Good Step Sizes
1 Algorithms for Learning Good Step Sizes Brian Zhang (bhz) and Manikant Tiwari (manikant) with the guidance of Prof. Tim Roughgarden I. MOTIVATION AND PREVIOUS WORK Many common algorithms in machine learning,
More informationA Modified Earthquake Model Based on Generalized Barabási Albert Scale-Free
Commun. Theor. Phys. (Beijing, China) 46 (2006) pp. 1011 1016 c International Academic Publishers Vol. 46, No. 6, December 15, 2006 A Modified Earthquake Model Based on Generalized Barabási Albert Scale-Free
More informationHierarchy. Will Penny. 24th March Hierarchy. Will Penny. Linear Models. Convergence. Nonlinear Models. References
24th March 2011 Update Hierarchical Model Rao and Ballard (1999) presented a hierarchical model of visual cortex to show how classical and extra-classical Receptive Field (RF) effects could be explained
More information1. A discrete-time recurrent network is described by the following equation: y(n + 1) = A y(n) + B x(n)
Neuro-Fuzzy, Revision questions June, 25. A discrete-time recurrent network is described by the following equation: y(n + ) = A y(n) + B x(n) where A =.7.5.4.6, B = 2 (a) Sketch the dendritic and signal-flow
More informationARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92
ARTIFICIAL NEURAL NETWORKS گروه مطالعاتي 17 بهار 92 BIOLOGICAL INSPIRATIONS Some numbers The human brain contains about 10 billion nerve cells (neurons) Each neuron is connected to the others through 10000
More informationCMSC 421: Neural Computation. Applications of Neural Networks
CMSC 42: Neural Computation definition synonyms neural networks artificial neural networks neural modeling connectionist models parallel distributed processing AI perspective Applications of Neural Networks
More informationFinancial Informatics XVII:
Financial Informatics XVII: Unsupervised Learning Khurshid Ahmad, Professor of Computer Science, Department of Computer Science Trinity College, Dublin-, IRELAND November 9 th, 8. https://www.cs.tcd.ie/khurshid.ahmad/teaching.html
More informationEffect of number of hidden neurons on learning in large-scale layered neural networks
ICROS-SICE International Joint Conference 009 August 18-1, 009, Fukuoka International Congress Center, Japan Effect of on learning in large-scale layered neural networks Katsunari Shibata (Oita Univ.;
More informationEnduring understanding 1.A: Change in the genetic makeup of a population over time is evolution.
The AP Biology course is designed to enable you to develop advanced inquiry and reasoning skills, such as designing a plan for collecting data, analyzing data, applying mathematical routines, and connecting
More informationMachine Learning. Neural Networks. (slides from Domingos, Pardo, others)
Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward
More information