λ-universe: Introduction and Preliminary Study
ABDOLREZA JOGHATAIE
CE College, Sharif University of Technology
Azadi Avenue, Tehran
IRAN

Abstract: - Interactions between the members of an imaginary universe, in which all the members are adaptive and have learning capability, are simulated numerically using artificial neural networks. The universe is called the λ-universe for brevity. It is shown here that in such a universe, the rules governing the behavior of the members may be formed inside the universe by its own members, and that the randomness observed in the behavior of the members is a direct result of their learning capability. Although this is not a simulation of the real universe, some fundamental concepts of astrophysics have been implemented in it.

Key-Words: - Neural Networks, Interaction, λ-universe, Adaptivity, Astrophysics, Learning

1 Introduction
Learning the cause-effect relationship governing a phenomenon is the main task that most researchers expect artificial learning systems, specifically artificial neural networks, to do well [1,2]. Systems control [3], one of the most complicated problems in engineering and mechanics, has been well undertaken by neural networks, where a neural network is trained to serve as the brain for controlling the system under study [4-7]. In sociology and linguistics too, interactions among intelligent individuals have recently been simulated, and it has been shown that states of common knowledge [8] and linguistic rules [9] may emerge as direct results of the exchange of knowledge between the individuals, which is called here the principle of learning from each other. Although learning is mainly considered a high-level biological activity in nature, the question raised here is: how would the universe be if every part of it, specifically every part of the matter, had some sort of adaptivity and learning capability?
Is it possible that the astrophysical rules we observe are the outcome of activities inside the matter itself? A first approach to this problem is numerical simulation. Recently in astrophysics too, computer simulations have provided a better understanding of stellar evolution and galactic kinematics [10]. Here, the results of a preliminary study of the numerical simulation of such a hypothetical universe, made up of adaptive members, are reported. The members interact with each other, and some general rules emerge as a result of their interactions. We may assume that there are some fundamental particles, similar to the synaptic transmitters in biological synapses [11,12], which are responsible for this adaptivity and learning of the matter. Some features of this universe, especially the evolution and appearance of rules governing the interactions between the particles constituting the matter, are studied here. As an example, Newton's third law of mechanics, which states the equality of the action and reaction forces between two interacting bodies, has been selected for demonstration, where it is shown that this law can be generated by the learning matter itself, and also that the
size of the action and reaction forces is not known a priori. Instead, it is determined over time, with the aging of the learning universe. This study is not a simulation of the real universe, and it was not intended to draw from it any conclusion about the evolution of the real universe. However, some similarities between this imaginary universe, from its birth and generation to its evolution, and the real universe are noteworthy. It is then concluded that, in such a universe: 1) rules are generated by the matter itself and are ever changing; 2) randomness is the direct result of the learning capability of the matter; 3) different sets of rules and material properties might evolve in isolated large bodies of such a universe; and 4) the rules governing the learning matter, and its properties, depend on the initial conditions of the learning universe. The imaginary learning universe is, for abbreviation and also to distinguish it from the real universe, denoted the λ-universe, where λ stands for learning. The λ-universe is assumed to be comprised of imaginary fundamental particles, atoms, molecules, and larger bodies such as stellar constellations and galaxies, here denoted λ-particles, λ-atoms, λ-galaxies, etc. It is assumed that every subset or member of this universe, called a λ-member, is adaptive, whether it is a λ-fundamental particle, a λ-atom or a λ-galaxy. One can easily extend this idea to the interactions between any two members of the λ-universe, even, for example, between a λ-fundamental particle and a λ-galaxy.

2 Numerical Study
To provide some insight into the idea of the λ-universe, a numerical study was undertaken. The following sections contain information about the assumptions and results of the study.

2.1 Simulation of the λ-members
Each adaptive λ-member was simulated by a three-layer feed-forward neural network (perceptron) with 1 input, 1 output and 3 hidden units, resulting in 6 connection weights, as shown in Fig. 1.
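The 1-3-1 architecture just described can be sketched in Python. This is only an illustrative reconstruction (the function and variable names are mine, not from the paper), with the activation function left as a parameter since the paper specifies it separately:

```python
import numpy as np

def forward(w_hidden, w_out, x, act):
    """Forward pass of a 1-3-1 perceptron: 3 input-to-hidden weights plus
    3 hidden-to-output weights give the 6 connection weights of Fig. 1
    (no bias terms, consistent with the stated weight count)."""
    h = act(w_hidden * x)          # outputs of the 3 hidden units
    return act(np.dot(w_out, h))   # single output unit

# Example with a plain logistic activation as a stand-in
# (the paper's own type-dependent activation is defined in Eq. (1)):
logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
w_h = np.array([0.1, -0.2, 0.3])   # arbitrary illustrative weights
w_o = np.array([0.5, 0.4, -0.1])
y = forward(w_h, w_o, 1.0, logistic)
```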
This is one of the simplest forms that could be considered for an artificial neural network. The interested reader may refer to any text on the fundamentals of neural networks, such as [1], for more details about multilayer feed-forward neural networks. Since the information in a perceptron is kept in its connection weights, all perceptrons with the same architecture possess the same learning capacity. Hence all the λ-members had the same learning capacity; however, they were distinguished from each other by: (1) their activation functions and (2) the values of their connection weights.

2.2 λ-membership Types
It was then assumed that the activation functions are responsible for determining the type of a λ-member. For example, if the member is an atom, its type can be gold, silver, oxygen, etc. In this first attack on the problem, only 3 types of λ-members, denoted T_1, T_2 and T_3, were assumed to exist in our λ-universe. The same activation function was used for all of the processing units in all of the neural networks representing the λ-members of a specific type i:

F_i(z) = a_i + b_i + 2/(1 + e^(-γz)),  i = 1, 2, 3    (1)

where z is the input to the activation function of a unit, equal to the weighted sum of the inputs the unit receives from its preceding units; γ is a parameter which plays a key role in the learning capability of the neural networks [1], assumed a universal constant in this study; and a_i and b_i represent two λ-membership type constants. In this study, the following arbitrary values of
γ = 0.5,  a_i = 0.1i  and  b_i = 0.2i,  i = 1, 2 and 3    (2)

were selected. If N = the number of λ-members in the λ-universe, and N_i = the number of λ-members of type T_i, i = 1, 2 and 3, then N = N_1 + N_2 + N_3. The population of λ-members in this first study of the subject was chosen to be small, i.e. N = 20. It was also decided to distribute the λ-members randomly among the λ-membership types T_1, T_2 and T_3, resulting in N_1 = 9, N_2 = 4 and N_3 = 7.

2.3 Generation of Initial Connection Weights as Initial Conditions
Not all of the individual λ-members belonging to a specific λ-membership type were considered exactly the same. They were differentiated by their initial connection weights. At the beginning of the simulation, in other words right after the birth of our imaginary λ-universe, the values of the connection weights of the λ-members were specified randomly. This was the Big Bang in our imaginary λ-universe, responsible for generating the variety and randomness in the properties, behavior and rules governing the λ-matter filling the imaginary λ-universe. We have borrowed the terms "birth" and "Big Bang" from astrophysics, because the numerical process of generating the initial connection weights for the λ-members seemed very similar to the early stages of the real universe, when the universe was hot, dense, irregular and anisotropic [13,14,15,16]. So, the knowledge content of each λ-member was, in general, different from the others. However, the λ-atoms were bound to, or separated from, each other by their λ-membership type numbers. It was included in the simulation that upon the interaction of two λ-members, say λ-members A and B, belonging to the same or to two different λ-membership types, each had to receive as input the type number of the other λ-member. The output of each of them was then its response to the other one. Hence, in this model of an imaginary universe of λ-matter, a λ-member was known to the other λ-members only by its type number.
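The setup described so far, Eqs. (1)-(2) plus the random "Big Bang" initialization, can be sketched as follows. This is a hypothetical reconstruction: the type counts use N_1 = 9, N_2 = 4 and N_3 = N - N_1 - N_2 = 7, and drawing the initial weights from a standard normal distribution is my assumption, since the paper says only that the weights were specified randomly:

```python
import numpy as np

GAMMA, N = 0.5, 20                 # universal constant γ and population size, per the text
TYPE_COUNTS = {1: 9, 2: 4, 3: 7}   # N1, N2, N3 with N1 + N2 + N3 = N

def activation(z, i):
    """F_i(z) = a_i + b_i + 2/(1 + e^(-γz)) of Eq. (1),
    with a_i = 0.1i and b_i = 0.2i from Eq. (2)."""
    return 0.1 * i + 0.2 * i + 2.0 / (1.0 + np.exp(-GAMMA * z))

def big_bang(rng):
    """Create the N λ-members with random connection weights (the 'Big Bang')."""
    members = []
    for t, count in TYPE_COUNTS.items():
        for _ in range(count):
            members.append({
                "type": t,
                "w_hidden": rng.standard_normal(3),  # 3 input-to-hidden weights
                "w_out": rng.standard_normal(3),     # 3 hidden-to-output weights
            })
    return members

universe = big_bang(np.random.default_rng(0))
```

Note that at z = 0 the sigmoidal term equals 1, so F_i(0) = 0.3i + 1; the type constants simply shift each type's response range.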
However, it was receiving different reactions from the other λ-members, even from λ-members belonging to the same type, because of their different knowledge contents. This means that right after the Big Bang in our imaginary universe, each λ-member was following its own rule in responding to the other λ-members. So, rules were local, and there was no general rule governing the response of the λ-membership types to one another.

2.4 Interactions Between the λ-members
Knowledge Content as One of the Properties of the λ-members
Similar to the real universe, where particles interact by exchanging properties such as spin, isospin, energy, mass, color, etc. [14], it was assumed in this study that the knowledge content of a λ-member was also one of its properties, which had to be exchanged during its interactions with the other λ-members. Hence it was assumed that any two interacting λ-members had to follow the so-called principle of learning from each other, which states that any two interacting learning members should try, or are forced, to reduce the difference in their mutual responses through the gradual updating of their connection weights [8]. To this end, each λ-member used as its target output the output of the other λ-member.

Learning Rule and Updating of Connection Weights
In this simulation, the back-propagation learning rule [17] was used for updating the connection weights. Interactions among λ-members occurred during discrete time steps, where at each time step only two λ-members
were selected randomly from the universe of λ-members, their responses to each other were computed, and, using a small appropriate learning rate, here lr = 0.0002, their connection weights were updated so that their outputs got closer to each other and the difference between their outputs was reduced by a small amount. A schematic representation of this concept of interaction and knowledge exchange between two λ-members is shown in Fig. 2.

3 Results of Numerical Simulations
Numerical simulations, based on the above assumptions about the λ-matter and the methodology of simulating interactions between the λ-members, show that after quite a large number of updating cycles, local rules disappear and more general rules emerge. These general rules specify the type-to-type responses of the λ-members. To provide a more quantitative understanding of the evolution process, for each pair of types i, j = 1, 2 and 3, the mean and standard deviation of the rules governing the type-to-type responses were monitored through time. To this end, at each simulation time step, type number j was fed as input to all of the λ-members of type i, and the mean E_ij and the standard deviation S_ij of the observed outputs were calculated. As an example, the following E and S matrices, calculated right after the Big Bang as well as after a large number of interactions, i.e. a total of 10,000,000 interactions among all of the λ-members, are reported here:

E_0 = [E_ij]_0    (3)
S_0 = [S_ij]_0    (4)
E_L = [E_ij]_L    (5)
S_L = [S_ij]_L    (6)

(the numerical entries of these 3x3 matrices are not preserved in this transcription), where the subscripts 0 and L mean after 0 and after a large number of interactions, respectively. It is noteworthy that the matrix E_0 = [E_ij]_0 was not symmetric at the onset of interactions. However, with the advent of time and more interactions, it got closer to a symmetric form, which is of course evidence of the gradual appearance of Newton's third law.
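The interaction step and the monitoring of E and S can be sketched as follows. This is an illustrative reconstruction under stated assumptions (a squared-difference loss for the "learning from each other" update, the activation constants of Eq. (2), and the text's learning rate lr = 0.0002); the original code is not available:

```python
import numpy as np

GAMMA, LR = 0.5, 2e-4  # γ from Eq. (2) and the learning rate lr = 0.0002 from the text

def act(z, t):
    """F_t(z) of Eq. (1) with a_t = 0.1t and b_t = 0.2t from Eq. (2)."""
    return 0.3 * t + 2.0 / (1.0 + np.exp(-GAMMA * z))

def d_act(z):
    """Derivative of the sigmoidal part s = 2/(1 + e^(-γz)): ds/dz = γ s (2 - s) / 2."""
    s = 2.0 / (1.0 + np.exp(-GAMMA * z))
    return GAMMA * s * (2.0 - s) / 2.0

def forward(m, x):
    zh = m["w_hidden"] * x          # pre-activations of the 3 hidden units (1 input)
    h = act(zh, m["type"])
    zo = np.dot(m["w_out"], h)      # output pre-activation
    return zh, h, zo, act(zo, m["type"])

def interact(A, B):
    """One discrete time step: A and B exchange type numbers and each takes the
    other's output as its target, reducing the difference in mutual responses."""
    fA, fB = forward(A, float(B["type"])), forward(B, float(A["type"]))
    for m, (zh, h, zo, y), target, x in ((A, fA, fB[3], float(B["type"])),
                                         (B, fB, fA[3], float(A["type"]))):
        delta = (y - target) * d_act(zo)             # output-layer error signal
        g_out = delta * h                            # gradient w.r.t. w_out
        g_hid = delta * m["w_out"] * d_act(zh) * x   # gradient w.r.t. w_hidden
        m["w_out"] -= LR * g_out
        m["w_hidden"] -= LR * g_hid
    return abs(fA[3] - fB[3])  # output difference before this update

def response_stats(members):
    """E_ij and S_ij: mean and standard deviation of type-i members' outputs
    when fed type number j, as monitored in the text."""
    E, S = np.zeros((3, 3)), np.zeros((3, 3))
    for i in (1, 2, 3):
        group = [m for m in members if m["type"] == i]
        for j in (1, 2, 3):
            outs = [forward(m, float(j))[3] for m in group]
            E[i - 1, j - 1], S[i - 1, j - 1] = np.mean(outs), np.std(outs)
    return E, S
```

Running interact over randomly chosen pairs for many steps, while tracking max |E - E^T| and the elements of S, corresponds to the quantities whose evolution is reported in this section.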
Another point to notice here is the gradual convergence of the S matrix to 0 as the imaginary universe grows older. It means that with aging, the rules in this universe become more refined, simpler and universal. As can be seen, all the elements of the S matrix have converged to 0, while E has changed dramatically but has not converged to 0. The convergence of S has been, according to the numerical observations, essentially monotonic, as represented in Fig. 3, where each curve shows the variation of one of the 9 elements of the S matrix as a function of the simulation time step, for up to 10,000,000 time steps.

3.1 Effect of Initial Conditions
Next, it was desired to study the effect of the initial conditions of the λ-universe on the final results of the interactions. Further numerical studies revealed that in most situations the E and S matrices converged to symmetry and to 0, respectively, although E was in general different for different initial conditions. However, there have also been many initial conditions for which such convergence was not observed even after a large number of interactions, which may be considered an indication of the probable instability of the
generated λ-universe. Hence the initial conditions of the λ-universe play a vital role in its stability and the development of its rules. More research is being done on more complicated situations where the input vector to a λ-member is many-dimensional, containing information, for example, about the position, mass and other properties attached to the λ-member itself as well as to the other λ-members interacting with it. A many-dimensional output can also be considered for a λ-member, accounting for the exchange of its different properties with the other λ-members. Finally, the architecture of a λ-member can be considered a function of its λ-membership type.

4 Concluding Remarks
The results obtained in this numerical study of the imaginary universe of λ-members can be summarized as follows:
(1) Some, if not all, of the general rules governing the λ-members, such as the rule of equality of action and reaction, may emerge as the result of interactions in the λ-universe. These rules, which change throughout time, are generated by the learning matter itself and are not imposed on it from the beginning.
(2) The rules governing a λ-universe are generally functions of the initial conditions of the λ-matter. In this study, the randomly generated connection weights of the neural networks served as the initial conditions of the λ-matter. This numerical process of generating randomness in the λ-matter can be considered a Big Bang in the λ-universe.
(3) The randomness observed in the response of the members belonging to a specific λ-membership type to the λ-members of another type is a direct result of the difference in their knowledge contents. The S_L matrix in equation (6), which is not exactly 0, supports this statement.

References
[1] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing, Vol. 1: Foundations, MIT Press, Cambridge, MA,
[2] K. Kang, J. H. Oh and C. Kwon, Learning by a Population of Perceptrons, Phys. Rev.
E, Vol. 55, 1997, pp.
[3] L. Meirovitch, Dynamics and Control of Structures, Wiley, New York,
[4] A. Joghataie, Neural Network and Fuzzy Logic in Structural Control, Ph.D. thesis, University of Illinois at Urbana-Champaign,
[5] R. Bakker, J. Schouten, F. Takens and C. M. van den Bleek, Neural Network Model to Control an Experimental Chaotic Pendulum, Phys. Rev. E, Vol. 54, 1996, pp.
[6] E. R. Weeks and J. M. Burgess, Evolving Artificial Neural Networks to Control Chaotic Systems, Phys. Rev. E, Vol. 56, 1997, pp.
[7] D. A. White and D. A. Sofge, Handbook of Intelligent Control, Van Nostrand Reinhold, New York,
[8] A. Joghataie, Communication in a Society of Interacting Perceptrons, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, SMC '99, Tokyo, Japan, October 12-15, 1999, pp. V-149 to V-153.
[9] E. Hutchins and B. Hazlehurst, How to Invent a Lexicon: The Development of Shared Symbols in Interaction, in Artificial Societies: The Computer Simulation of Social Life, edited by N. Gilbert, UCL Press Ltd.,
[10] J. M. A. Danby, R. Kouzes and C. Witney, Astrophysics Simulations, Wiley, New York, 1995.
[11] G. M. Shepherd, Neurobiology, Oxford Univ. Press, New York,
[12] J. A. Anderson and E. Rosenfeld, Neurocomputing: Foundations of Research, MIT Press, Cambridge,
[13] J. Silk, The Big Bang: The Creation and
Evolution of the Universe, W. H. Freeman and Co., San Francisco,
[14] I. L. Rosental, Big Bang, Big Bounce: How Particles and Fields Drive Cosmic Evolution, English translation, Springer-Verlag, Berlin/Heidelberg,
[15] A. Fairall, Large-Scale Structures in the Universe, Wiley-Praxis, Chichester,
[16] R. Ellis, The Formation and Evolution of Galaxies, Nature, Vol. 395, 1998, pp. A3-A8.
[17] D. E. Rumelhart, G. E. Hinton and R. J. Williams, Learning Representations by Back-propagating Errors, Nature, Vol. 323, 1986, pp.

FIG. 1. Architecture of the multilayer feed-forward neural networks used in this study.
FIG. 2. Interaction and exchange of information between two λ-members (member A of type i, member B of type j; i, j = 1, 2, 3).
FIG. 3. Each curve is related to one of the 9 elements of the S matrix (standard deviations S_ij, i, j = 1, 2, 3, versus simulation time steps).
Artificial Neural Networks Data Base and Data Mining Group of Politecnico di Torino Elena Baralis Politecnico di Torino Artificial Neural Networks Inspired to the structure of the human brain Neurons as
More informationAn artificial neural networks (ANNs) model is a functional abstraction of the
CHAPER 3 3. Introduction An artificial neural networs (ANNs) model is a functional abstraction of the biological neural structures of the central nervous system. hey are composed of many simple and highly
More informationRESPONSE PREDICTION OF STRUCTURAL SYSTEM SUBJECT TO EARTHQUAKE MOTIONS USING ARTIFICIAL NEURAL NETWORK
ASIAN JOURNAL OF CIVIL ENGINEERING (BUILDING AND HOUSING) VOL. 7, NO. 3 (006) PAGES 301-308 RESPONSE PREDICTION OF STRUCTURAL SYSTEM SUBJECT TO EARTHQUAKE MOTIONS USING ARTIFICIAL NEURAL NETWORK S. Chakraverty
More informationUnit III. A Survey of Neural Network Model
Unit III A Survey of Neural Network Model 1 Single Layer Perceptron Perceptron the first adaptive network architecture was invented by Frank Rosenblatt in 1957. It can be used for the classification of
More informationLecture 2. G. Cowan Lectures on Statistical Data Analysis Lecture 2 page 1
Lecture 2 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,
More informationarxiv:quant-ph/ v1 17 Oct 1995
PHYSICS AND CONSCIOUSNESS Patricio Pérez arxiv:quant-ph/9510017v1 17 Oct 1995 Departamento de Física, Universidad de Santiago de Chile Casilla 307, Correo 2, Santiago, Chile ABSTRACT Some contributions
More informationLogic Learning in Hopfield Networks
Logic Learning in Hopfield Networks Saratha Sathasivam (Corresponding author) School of Mathematical Sciences, University of Science Malaysia, Penang, Malaysia E-mail: saratha@cs.usm.my Wan Ahmad Tajuddin
More informationComputational Intelligence
Plan for Today Single-Layer Perceptron Computational Intelligence Winter Term 00/ Prof. Dr. Günter Rudolph Lehrstuhl für Algorithm Engineering (LS ) Fakultät für Informatik TU Dortmund Accelerated Learning
More informationFuzzy Cognitive Maps Learning through Swarm Intelligence
Fuzzy Cognitive Maps Learning through Swarm Intelligence E.I. Papageorgiou,3, K.E. Parsopoulos 2,3, P.P. Groumpos,3, and M.N. Vrahatis 2,3 Department of Electrical and Computer Engineering, University
More informationSPSS, University of Texas at Arlington. Topics in Machine Learning-EE 5359 Neural Networks
Topics in Machine Learning-EE 5359 Neural Networks 1 The Perceptron Output: A perceptron is a function that maps D-dimensional vectors to real numbers. For notational convenience, we add a zero-th dimension
More informationNeural Networks. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington
Neural Networks CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 Perceptrons x 0 = 1 x 1 x 2 z = h w T x Output: z x D A perceptron
More informationESANN'2001 proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), April 2001, D-Facto public., ISBN ,
Relevance determination in learning vector quantization Thorsten Bojer, Barbara Hammer, Daniel Schunk, and Katharina Tluk von Toschanowitz University of Osnabrück, Department of Mathematics/ Computer Science,
More informationSupervised (BPL) verses Hybrid (RBF) Learning. By: Shahed Shahir
Supervised (BPL) verses Hybrid (RBF) Learning By: Shahed Shahir 1 Outline I. Introduction II. Supervised Learning III. Hybrid Learning IV. BPL Verses RBF V. Supervised verses Hybrid learning VI. Conclusion
More informationA Novel Activity Detection Method
A Novel Activity Detection Method Gismy George P.G. Student, Department of ECE, Ilahia College of,muvattupuzha, Kerala, India ABSTRACT: This paper presents an approach for activity state recognition of
More informationA SEASONAL FUZZY TIME SERIES FORECASTING METHOD BASED ON GUSTAFSON-KESSEL FUZZY CLUSTERING *
No.2, Vol.1, Winter 2012 2012 Published by JSES. A SEASONAL FUZZY TIME SERIES FORECASTING METHOD BASED ON GUSTAFSON-KESSEL * Faruk ALPASLAN a, Ozge CAGCAG b Abstract Fuzzy time series forecasting methods
More informationCellular Automata. and beyond. The World of Simple Programs. Christian Jacob
Cellular Automata and beyond The World of Simple Programs Christian Jacob Department of Computer Science Department of Biochemistry & Molecular Biology University of Calgary CPSC / MDSC 605 Fall 2003 Cellular
More informationMr. Harshit K. Dave 1, Dr. Keyur P. Desai 2, Dr. Harit K. Raval 3
Investigations on Prediction of MRR and Surface Roughness on Electro Discharge Machine Using Regression Analysis and Artificial Neural Network Programming Mr. Harshit K. Dave 1, Dr. Keyur P. Desai 2, Dr.
More informationIterative Laplacian Score for Feature Selection
Iterative Laplacian Score for Feature Selection Linling Zhu, Linsong Miao, and Daoqiang Zhang College of Computer Science and echnology, Nanjing University of Aeronautics and Astronautics, Nanjing 2006,
More informationAn Adaptive Clustering Method for Model-free Reinforcement Learning
An Adaptive Clustering Method for Model-free Reinforcement Learning Andreas Matt and Georg Regensburger Institute of Mathematics University of Innsbruck, Austria {andreas.matt, georg.regensburger}@uibk.ac.at
More informationHow to do backpropagation in a brain. Geoffrey Hinton Canadian Institute for Advanced Research & University of Toronto
1 How to do backpropagation in a brain Geoffrey Hinton Canadian Institute for Advanced Research & University of Toronto What is wrong with back-propagation? It requires labeled training data. (fixed) Almost
More informationEfficient Sensitivity Analysis in Hidden Markov Models
Efficient Sensitivity Analysis in Hidden Markov Models Silja Renooij Department of Information and Computing Sciences, Utrecht University P.O. Box 80.089, 3508 TB Utrecht, The Netherlands silja@cs.uu.nl
More informationSimple Neural Nets For Pattern Classification
CHAPTER 2 Simple Neural Nets For Pattern Classification Neural Networks General Discussion One of the simplest tasks that neural nets can be trained to perform is pattern classification. In pattern classification
More informationA thorough derivation of back-propagation for people who really want to understand it by: Mike Gashler, September 2010
A thorough derivation of back-propagation for people who really want to understand it by: Mike Gashler, September 2010 Define the problem: Suppose we have a 5-layer feed-forward neural network. (I intentionally
More informationAI Programming CS F-20 Neural Networks
AI Programming CS662-2008F-20 Neural Networks David Galles Department of Computer Science University of San Francisco 20-0: Symbolic AI Most of this class has been focused on Symbolic AI Focus or symbols
More information1. Introduction. 2. Artificial Neural Networks and Fuzzy Time Series
382 IJCSNS International Journal of Computer Science and Network Security, VOL.8 No.9, September 2008 A Comparative Study of Neural-Network & Fuzzy Time Series Forecasting Techniques Case Study: Wheat
More informationParallel layer perceptron
Neurocomputing 55 (2003) 771 778 www.elsevier.com/locate/neucom Letters Parallel layer perceptron Walmir M. Caminhas, Douglas A.G. Vieira, João A. Vasconcelos Department of Electrical Engineering, Federal
More informationNeural Networks Introduction
Neural Networks Introduction H.A Talebi Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Winter 2011 H. A. Talebi, Farzaneh Abdollahi Neural Networks 1/22 Biological
More informationEffect of number of hidden neurons on learning in large-scale layered neural networks
ICROS-SICE International Joint Conference 009 August 18-1, 009, Fukuoka International Congress Center, Japan Effect of on learning in large-scale layered neural networks Katsunari Shibata (Oita Univ.;
More informationA NEURAL FUZZY APPROACH TO MODELING THE THERMAL BEHAVIOR OF POWER TRANSFORMERS
A NEURAL FUZZY APPROACH TO MODELING THE THERMAL BEHAVIOR OF POWER TRANSFORMERS Huy Huynh Nguyen A thesis submitted for the degree of Master of Engineering School of Electrical Engineering Faculty of Health,
More informationNeural Networks: Introduction
Neural Networks: Introduction Machine Learning Fall 2017 Based on slides and material from Geoffrey Hinton, Richard Socher, Dan Roth, Yoav Goldberg, Shai Shalev-Shwartz and Shai Ben-David, and others 1
More information6.034f Neural Net Notes October 28, 2010
6.034f Neural Net Notes October 28, 2010 These notes are a supplement to material presented in lecture. I lay out the mathematics more prettily and etend the analysis to handle multiple-neurons per layer.
More informationMachine Learning. Neural Networks. (slides from Domingos, Pardo, others)
Machine Learning Neural Networks (slides from Domingos, Pardo, others) For this week, Reading Chapter 4: Neural Networks (Mitchell, 1997) See Canvas For subsequent weeks: Scaling Learning Algorithms toward
More informationMachine Learning. Neural Networks. (slides from Domingos, Pardo, others)
Machine Learning Neural Networks (slides from Domingos, Pardo, others) Human Brain Neurons Input-Output Transformation Input Spikes Output Spike Spike (= a brief pulse) (Excitatory Post-Synaptic Potential)
More informationThe Minimum Number of Hidden Neurons Does Not Necessarily Provide the Best Generalization
The Minimum Number of Hidden Neurons Does Not Necessarily Provide the Best Generalization Jason M. Kinser The Institute for Biosciences, Bioinformatics, and Biotechnology George Mason University, MSN 4E3
More informationNeural Turing Machine. Author: Alex Graves, Greg Wayne, Ivo Danihelka Presented By: Tinghui Wang (Steve)
Neural Turing Machine Author: Alex Graves, Greg Wayne, Ivo Danihelka Presented By: Tinghui Wang (Steve) Introduction Neural Turning Machine: Couple a Neural Network with external memory resources The combined
More informationSections 18.6 and 18.7 Artificial Neural Networks
Sections 18.6 and 18.7 Artificial Neural Networks CS4811 - Artificial Intelligence Nilufer Onder Department of Computer Science Michigan Technological University Outline The brain vs. artifical neural
More informationExtracting Provably Correct Rules from Artificial Neural Networks
Extracting Provably Correct Rules from Artificial Neural Networks Sebastian B. Thrun University of Bonn Dept. of Computer Science III Römerstr. 64, D-53 Bonn, Germany E-mail: thrun@cs.uni-bonn.de thrun@cmu.edu
More informationNeural Network to Control Output of Hidden Node According to Input Patterns
American Journal of Intelligent Systems 24, 4(5): 96-23 DOI:.5923/j.ajis.2445.2 Neural Network to Control Output of Hidden Node According to Input Patterns Takafumi Sasakawa, Jun Sawamoto 2,*, Hidekazu
More informationMcGill University > Schulich School of Music > MUMT 611 > Presentation III. Neural Networks. artificial. jason a. hockman
jason a. hockman Overvie hat is a neural netork? basics and architecture learning applications in music History 1940s: William McCulloch defines neuron 1960s: Perceptron 1970s: limitations presented (Minsky)
More informationMemories Associated with Single Neurons and Proximity Matrices
Memories Associated with Single Neurons and Proximity Matrices Subhash Kak Oklahoma State University, Stillwater Abstract: This paper extends the treatment of single-neuron memories obtained by the use
More informationCS 6501: Deep Learning for Computer Graphics. Basics of Neural Networks. Connelly Barnes
CS 6501: Deep Learning for Computer Graphics Basics of Neural Networks Connelly Barnes Overview Simple neural networks Perceptron Feedforward neural networks Multilayer perceptron and properties Autoencoders
More informationPractical implementation of possibilistic probability mass functions
Practical implementation of possibilistic probability mass functions Leen Gilbert Gert de Cooman Etienne E. Kerre October 12, 2000 Abstract Probability assessments of events are often linguistic in nature.
More informationNeural Networks (and Gradient Ascent Again)
Neural Networks (and Gradient Ascent Again) Frank Wood April 27, 2010 Generalized Regression Until now we have focused on linear regression techniques. We generalized linear regression to include nonlinear
More informationArtificial Neural Networks. Q550: Models in Cognitive Science Lecture 5
Artificial Neural Networks Q550: Models in Cognitive Science Lecture 5 "Intelligence is 10 million rules." --Doug Lenat The human brain has about 100 billion neurons. With an estimated average of one thousand
More information