Potential Function Explanation of the Quick Algorithm of the Synergetic Neural Network


Computer Science Technical Reports, Computer Science, Iowa State University (2002)

Potential Function Explanation of the Quick Algorithm of the Synergetic Neural Network

Jie Bao, Iowa State University

Follow this and additional works at: http://lib.dr.iastate.edu/cs_techreports
Part of the Artificial Intelligence and Robotics Commons

Recommended Citation: Bao, Jie, "Potential Function Explanation of the Quick Algorithm of the Synergetic Neural Network" (2002). Computer Science Technical Reports. 3. http://lib.dr.iastate.edu/cs_techreports/3

This article is brought to you for free and open access by the Computer Science department at Iowa State University Digital Repository. It has been accepted for inclusion in Computer Science Technical Reports by an authorized administrator of Iowa State University Digital Repository. For more information, please contact digirep@iastate.edu.


Potential Function Explanation of the Quick Algorithm of the Synergetic Neural Network

Jie Bao
Artificial Intelligence Research Lab, Department of Computer Science, Iowa State University, Ames, IA 50011, USA
baojie@cs.iastate.edu

Abstract: We can determine the winner pattern of the synergetic neural network directly from the order parameters and attention parameters when the attention parameters are equal or constant. In this paper, we explain that the basis of that quick algorithm is that the potential function of the network and the attractive domain of each attractor are fully determined for given attention parameters, and that there is an analytic approximation for the division of the attractive domains.

Key Words: Neurally-Based Network, Synergetics, Potential Function, Quick Haken Network, Neural Network

1. The Principle of the Synergetic Neural Network

Synergetics is the science proposed by Haken in 1973 to explain phase transitions and self-organization in non-equilibrium systems. It was introduced into computer science in the late 1980s by Haken and brought about a new field, synergetic information processing, which includes the synergetic neural network, synergetic information theory, the synergetic computer, and so on [1]-[5].

The basic principle of the synergetic neural network is that the pattern recognition procedure can be viewed as a competition process among many order parameters [1][2]. For an unrecognized pattern q, we can construct a dynamic process that makes q evolve into the prototype pattern v_k closest to q(0) via intermediate states q(t). This process can be described as q(0) → q(t) → v_k. The dynamic equation for an unrecognized pattern q is [3]:

$$\dot{q} = \sum_{k=1}^{M} \lambda_k (v_k^+ q)\, v_k \;-\; B \sum_{k=1}^{M} \sum_{k' \neq k} (v_{k'}^+ q)^2 (v_k^+ q)\, v_k \;-\; C (q^+ q)\, q \;+\; F(t) \qquad (1)$$

where q is the state vector of the input pattern with initial value q(0), λ_k is the attention parameter, v_k is the prototype pattern vector, and v_k^+ is the adjoint vector of v_k, which satisfies

$$(v_k^+, v_{k'}) = v_k^{+T} v_{k'} = \delta_{kk'} \qquad (2)$$

The order parameter is ξ_k = v_k^+ q, and the corresponding dynamic equation of the order parameters is

$$\dot{\xi}_k = \lambda_k \xi_k \;-\; B \sum_{k' \neq k} \xi_{k'}^2\, \xi_k \;-\; C D\, \xi_k, \qquad D = \sum_{k'=1}^{M} \xi_{k'}^2 \qquad (3)$$

The strongest order parameter wins the competition, and the corresponding pattern is recognized.

2. Quick Algorithm of the Haken Network

The classical Haken model has the weakness that the iteration process costs a great deal of space and time in solving high-dimensional nonlinear equation sets when the number of patterns is large [7]. For example, Fig. 1 shows the evolution of 100 order parameters with initial values sampled randomly from (0, 1), iteration step length γ chosen as in [8], and all attention parameters set to 1. The network converges in around 750 steps.

Fig. 1: the evolution of the classical Haken network with 100 patterns
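To make this iteration concrete, here is a minimal sketch (ours, not code from the report; the function name, the step length γ = 0.01, the iteration cap, and the convergence tolerance are our own choices) that integrates the order-parameter equation (3) with a forward-Euler scheme for M = 100 patterns and balanced attention parameters:

```python
import numpy as np

def simulate_order_parameters(xi0, lam, B=1.0, C=1.0, gamma=0.01,
                              max_steps=20000, tol=1e-6):
    """Forward-Euler integration of the order-parameter dynamics (3):
    d(xi_k)/dt = lam_k*xi_k - B*sum_{k'!=k} xi_{k'}^2 * xi_k - C*D*xi_k,
    where D = sum_{k'} xi_{k'}^2 is the global coupling."""
    xi = np.asarray(xi0, dtype=float).copy()
    for step in range(max_steps):
        sq = xi ** 2
        D = sq.sum()                          # global coupling D
        dxi = lam * xi - B * (D - sq) * xi - C * D * xi
        xi += gamma * dxi
        if np.abs(gamma * dxi).max() < tol:   # updates negligible: converged
            return xi, step
    return xi, max_steps

rng = np.random.default_rng(0)
M = 100
xi0 = rng.uniform(0.0, 1.0, size=M)   # initial values sampled from (0, 1)
lam = np.ones(M)                      # all attention parameters set to 1
xi_final, steps = simulate_order_parameters(xi0, lam)
# With balanced attention parameters the largest initial order parameter
# should win; its final value settles near sqrt(lam/C) = 1.
print(xi_final.argmax() == xi0.argmax(), steps)
```

Only the winner's order parameter survives, settling near √(λ/C); all the others decay to zero.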

The time and space costs of this iteration (omitting all auxiliary blocks) are:

Space cost:
- order parameters: 100
- global coupling D: 100
- attention parameters: 100

Time cost:
- additions: (100+1) × 750
- multiplications: (100−1) × 750

In practice we may encounter competition among far more samples. For example, when retrieving from an image database we usually need to find a specific sample among tens of thousands of images or more, and the classical Haken network then performs poorly. There are two ways to attack this problem:

1) Classify the patterns at the matching layer via hierarchical competition: competition occurs first between subclasses and then inside the recognized subclass.

2) Improve the competition layer, simplifying the competition process and its space/time cost. This can be further divided into two sub-methods:

2.1) Reduce the time cost by judging the competition result from the initial conditions of the equation.

2.2) Reduce the space cost by transforming the differential equation sets into a matrix differential equation expressed with sparse matrices, exploiting the fact that the similarities between most pairs of patterns are low.

In this paper we discuss only method 2.1, that is, determining the survival of an order parameter directly from the initial conditions of the equation. From the discussion of the attention parameters, we find that the system state can be determined directly by the order parameters, the attention parameters, and the properties of the other parameters. In the following experiments we show that the winner pattern can be distinguished directly from the order parameters and the attention parameters when the attention parameters are equal or constant.

Reference [6] proves the following: if all λ_k are equal, then ξ_i < ξ_j implies ξ̇_i < ξ̇_j, so the evolutionary loci of the order parameters never intersect, the largest initial order parameter wins, and the network then converges. Moreover, for any i, 1 ≤ i ≤ M, if there exists j ≤ M that satisfies

$$\lambda_j - \lambda_i > \frac{1}{\tau} \ln\frac{\xi_i(0)}{\xi_j(0)} \qquad (4)$$

then

$$\xi_j(0)\, e^{\lambda_j \tau} > \xi_i(0)\, e^{\lambda_i \tau}, \quad \text{i.e.} \quad \xi_j(\tau) > \xi_i(\tau),$$

and the order parameter corresponding to pattern i will be attenuated.

According to this theorem, we can choose the winning order parameter and the corresponding prototype vector from the attention parameters and the initial values of the order parameters. This can be used as a quick algorithm for the synergetic neural network.
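The selection rule itself needs no iteration at all. A minimal sketch (our illustration; the name `quick_winner` and the default horizon τ = 1.0 are hypothetical choices) applies criterion (4): the surviving pattern is the one maximizing ln ξ_k(0) + λ_k τ.

```python
import numpy as np

def quick_winner(xi0, lam, tau=1.0):
    """Pick the winning pattern directly from the initial order parameters
    and the attention parameters, per criterion (4): pattern i loses to j
    whenever lam_j - lam_i > (1/tau) * ln(xi0_i / xi0_j), i.e. whenever
    log(xi0_j) + lam_j*tau > log(xi0_i) + lam_i*tau."""
    xi0 = np.asarray(xi0, dtype=float)
    lam = np.asarray(lam, dtype=float)
    return int(np.argmax(np.log(xi0) + lam * tau))

xi0 = [0.31, 0.74, 0.52]
# Equal attention parameters: the largest initial order parameter wins.
print(quick_winner(xi0, lam=[1.0, 1.0, 1.0]))   # -> 1
# A sufficiently unbalanced attention parameter overturns that ordering.
print(quick_winner(xi0, lam=[2.2, 1.0, 1.0]))   # -> 0
```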

3. Potential Function Explanation of the Quick Algorithm

Now we give an explanation of the quick Haken network algorithm from the viewpoint of the potential function. The core of the quick Haken network is that once the attention parameters are determined, the potential function and all attractors and attractive domains are determined too; the evolutionary direction from a given initial state is then decided by the initial values of the order parameters and the attention parameters. Before the discussion, we state the following theorem [6].

Theorem. Suppose

$$B > 0, \quad C > 0, \quad (B + C)\lambda_i - C\lambda_j > 0, \quad i \neq j,\ i, j \leq M \qquad (5)$$

Then the stationary points of the corresponding synergetic neural network can be classified as:

(1) the zero point ξ = 0, which is an unstable fixed point;

(2) 2M stable points ξ = (0, …, ±√(λ_k/C), …, 0)^T, with potential function value V = −λ_k²/(4C);

(3) the remaining stationary points, which are saddle points.

For clarity, we discuss only the case M = 2 (with B = C = 1). The evolution equations of the order parameters of the two patterns are

$$\dot{\xi}_1 = \lambda_1 \xi_1 - \xi_2^2\, \xi_1 - (\xi_1^2 + \xi_2^2)\, \xi_1$$
$$\dot{\xi}_2 = \lambda_2 \xi_2 - \xi_1^2\, \xi_2 - (\xi_1^2 + \xi_2^2)\, \xi_2 \qquad (6)$$

and the potential function is

$$V = -\frac{1}{2}\lambda_1 \xi_1^2 - \frac{1}{2}\lambda_2 \xi_2^2 + \frac{1}{2}\xi_1^2 \xi_2^2 + \frac{1}{4}(\xi_1^2 + \xi_2^2)^2 \qquad (7)$$
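As a sanity check on the theorem, the attractor depth can be verified by hand from (7); restricting to the ξ1-axis, where the cross term vanishes:

```latex
\[
  V(\xi_1, 0) = -\tfrac{1}{2}\lambda_1 \xi_1^{2} + \tfrac{1}{4}\xi_1^{4},
  \qquad
  \frac{\partial V}{\partial \xi_1} = -\lambda_1 \xi_1 + \xi_1^{3} = 0
  \;\Longrightarrow\;
  \xi_1 = 0 \ \text{ or } \ \xi_1 = \pm\sqrt{\lambda_1}\,.
\]
\[
  V\!\left(\pm\sqrt{\lambda_1},\, 0\right)
  = -\tfrac{1}{2}\lambda_1^{2} + \tfrac{1}{4}\lambda_1^{2}
  = -\frac{\lambda_1^{2}}{4}\,,
\]
```

which is exactly the value −λ_k²/(4C) of the theorem with C = 1, and gives the attractor depth −0.25 when λ1 = 1.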

3.1 Balanced attention parameters

First, let us observe the potential function in the balanced case, λ1 = λ2 = 1.0.

[Left panel] Contour of the potential function; the x axis corresponds to ξ1 and the y axis to ξ2, both over a symmetric range around the origin. Note that the figure contains 4 attractors with value −0.25 (points B, C, D, E), 1 unstable fixed point (point A), and 4 saddle points (points F, G, H, I). This fits the theoretical result exactly.

[Right panel] Potential function. There are 4 apparent attractors in this figure, placed axially symmetrically. The hill at the origin is an unstable equilibrium point, and along the diagonal directions we find the 4 saddle points. The identical shape of the two patterns' attractors reflects the meaning of balanced attention parameters.

Fig. 2: the potential function with balanced attention parameters

3.2 Unbalanced attention parameters 0.6 / 0.4

The potential function when the attention parameters are unbalanced, with λ1 = 0.6 and λ2 = 0.4, shows that the attractive domain of pattern 1 is enlarged while the attractive domain of pattern 2 is reduced: the attention parameters have a great influence on the attractive domains. Note that both the width and the depth of the attractive domain of pattern 2 are greatly decreased compared with the balanced case.

Fig. 3: the potential function with unbalanced attention parameters 0.6 / 0.4

3.3 Unbalanced attention parameters 0.8 / 0.2

When the attention parameters are unbalanced with λ1 = 0.8 and λ2 = 0.2, the attractive domain of pattern 2 becomes very small, because the difference between the two attention parameters is now quite large; the difference in well depth between the two attractors is also quite large.

Fig. 4: the potential function with unbalanced attention parameters 0.8 / 0.2
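To make the well positions and depths concrete, a small numerical sketch (ours; B = C = 1 as in (7), and the grid range and resolution are arbitrary choices) evaluates V on a grid and reports the deepest well under the three attention-parameter settings:

```python
import numpy as np

def potential(x1, x2, lam1, lam2):
    """Potential function (7) with B = C = 1."""
    return (-0.5 * lam1 * x1**2 - 0.5 * lam2 * x2**2
            + 0.5 * x1**2 * x2**2
            + 0.25 * (x1**2 + x2**2)**2)

# The minima should sit at (+-sqrt(lam1), 0) and (0, +-sqrt(lam2)),
# with depths -lam^2/4.
xs = np.linspace(-2.0, 2.0, 401)
X1, X2 = np.meshgrid(xs, xs)
for lam1, lam2 in [(1.0, 1.0), (0.6, 0.4), (0.8, 0.2)]:
    V = potential(X1, X2, lam1, lam2)
    i, j = np.unravel_index(V.argmin(), V.shape)
    print(f"lam = ({lam1}, {lam2}): deepest well at "
          f"({X1[i, j]:+.2f}, {X2[i, j]:+.2f}), depth {V[i, j]:.4f}")
```

For (1.0, 1.0) all four wells are equally deep at −0.25; for (0.6, 0.4) and (0.8, 0.2) the deepest wells sit at (±√λ1, 0) with depths −λ1²/4 = −0.09 and −0.16, so pattern 2's well grows ever shallower.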

3.4 Attractive domains of the attractors

Now we compare the contours of the potential function in these three cases (balanced, 0.6/0.4, 0.8/0.2) to further observe the attractive domains. We can see the change of the attractive domains directly from these figures. Without loss of generality, we discuss only the 1st quadrant and give the boundaries of the attractors there.

Fig. 5: the contours and attractive domains of the potential function (panels: balanced, 0.6/0.4, 0.8/0.2)

The attractive domain of v1 grows as λ1 increases: some patterns that are attracted by v2 in the balanced case are attracted by v1 in the two unbalanced cases. The evolutionary locus of the initial state (0.4, 0.5) is given for these three conditions; the sketch below reproduces this comparison numerically.
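A minimal numerical sketch (ours; the step length, iteration cap, and tolerance are arbitrary choices) integrates the two-pattern dynamics (6) from the initial state (0.4, 0.5) under the three attention-parameter settings and reports which prototype captures it:

```python
import numpy as np

def winning_pattern(xi_init, lam, gamma=0.01, max_steps=20000, tol=1e-8):
    """Integrate the two-pattern dynamics (6) with B = C = 1 and return the
    index (0 or 1) of the prototype whose attractor captures the state."""
    xi = np.asarray(xi_init, dtype=float).copy()
    for _ in range(max_steps):
        D = (xi ** 2).sum()
        # (6): d(xi_1)/dt = lam_1*xi_1 - xi_2^2*xi_1 - D*xi_1, and symmetrically
        dxi = lam * xi - (D - xi ** 2) * xi - D * xi
        xi += gamma * dxi
        if np.abs(gamma * dxi).max() < tol:
            break
    return int(np.argmax(np.abs(xi)))

start = [0.4, 0.5]
for lam in ([1.0, 1.0], [0.6, 0.4], [0.8, 0.2]):
    print(lam, "->", winning_pattern(start, np.asarray(lam)))
```

Under balanced parameters the larger initial component (pattern 2) wins; raising λ1 pulls the same initial state into pattern 1's enlarged attractive domain.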

4. Conclusions

These examples let us understand the principle of the quick Haken network much better. The potential function of the network and the attractive domain of each attractor are fully determined for given attention parameters, and there is an analytic approximation for the division of the attractive domains. For balanced attention parameters, the attractive domains are entirely axially symmetric, so a pattern finally rolls into the attractive domain corresponding to its maximal order parameter. For unbalanced attention parameters, the attractive domains are no longer axially symmetric, but they can still be entirely determined and controlled, and the evolution path of a pattern with a given initial value is likewise determined.

The top-down nature of the potential function of the synergetic neural network makes the distribution of attractors and attractive domains very regular and easy to observe, and the boundaries of the attractive domains are smooth, so we can easily determine the competition result from the initial state of the system. However, in the unbalanced case we have the strong requirement that the attention parameters be much greater than the initial values of the order parameters. The key issue is how to depict the boundaries of the attractive domains precisely and analytically, which is possible for the synergetic neural network. That will be our future work.

References:
[1] H. Haken. Information and Self-Organization. Berlin: Springer, 1988.
[2] H. Haken, R. Haas, W. Banzhaf. A New Learning Algorithm for Synergetic Computers. Biol. Cybern. 62, 107.
[3] H. Haken. Synergetic Computer and Cognition: A Top-Down Approach to Neural Nets. Berlin: Springer, 1991.
[4] Wang Feiyue. Robotic vision system for object identification and manipulation using synergetic pattern recognition. Robotics and Computer-Integrated Manufacturing, 1993, 10(6): 445-459.
[5] A. Daffertshofer, H. Haken. Synergetic Computers for Pattern Recognition: A New Approach to Recognition of Deformed Patterns. Pattern Recognition, Vol. 27, pp. 1697-1705 (1994).
[6] Hu Dongliang, Qi Feihu. Study on Unbalanced Attention Parameters in Synergetic Approach on Pattern Recognition. Acta Electronica Sinica, 1999, 27(5). (in Chinese)
[7] Hu Dongliang, Qi Feihu. Reconstruction of Order Parameters in Synergetics Approach to Pattern Recognition. Journal of Infrared and Millimeter Waves, 1998, 17(3). (in Chinese)
[8] Zhao Tong, Qi Feihu, Feng Jiong. Analysis of the Recognition Performance of Synergetic Neural Networks. Acta Electronica Sinica, 2000, 28: 74-77. (in Chinese)