Solving TSP Using Lotka-Volterra Neural Networks without Self-Excitatory


Manli Li, Jiali Yu, Stones Lei Zhang, Hong Qu
Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, P. R. China
{limanli, yujiali, leizhang, hongqu}@uestc.edu.cn

Abstract: This paper proposes a new approach to solving Traveling Salesman Problems (TSPs) by using a class of Lotka-Volterra neural networks (LVNN) without self-excitatory connections. Stability criteria that ensure convergence to valid solutions are obtained. It is proved that an equilibrium state of this class is stable if and only if it corresponds to a valid solution of the TSP; that is, one always obtains a valid solution whenever the network converges to a stable state. A set of analytical conditions for the optimal settings of the LVNN is derived. Simulation results illustrate the theoretical analysis.

I. INTRODUCTION

The TSP can be defined as follows: given a set of N cities, find the shortest closed path that visits every city exactly once. The TSP is a typical combinatorial optimization problem, and theoretical and practical insight into it often carries over to other combinatorial optimization problems; in fact, much progress in combinatorial optimization can be traced back to research on the TSP. It has been widely studied in both the mathematics and artificial intelligence communities. Since the seminal work of Hopfield and Tank [3], there has been increasing interest in applying Hopfield neural networks to TSPs, and many attempts have been made to solve the problem this way [4], [5], [6], [7], [8], many of them studying the parameter settings of the Hopfield network. Most of these studies, however, suffer from limitations such as delicate fine-tuning of the network coefficients and invalidity of the obtained solutions. In this paper, we employ a class of LVNN to solve TSPs.
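The TSP as defined above can be made concrete with a tiny exhaustive-search baseline. The sketch below is our own illustration (not from the paper): it enumerates every tour that fixes city 0 and keeps the shortest, which is feasible only for very small N but gives the exact optimum that heuristic solvers are measured against.

```python
import itertools
import math

def tour_length(cities, order):
    """Total length of the closed tour visiting `cities` in `order`."""
    n = len(order)
    return sum(math.dist(cities[order[k]], cities[order[(k + 1) % n]])
               for k in range(n))

def brute_force_tsp(cities):
    """Return (best_order, best_length) by checking every permutation
    that fixes city 0 (closed tours are invariant under rotation)."""
    n = len(cities)
    best, best_len = None, float("inf")
    for rest in itertools.permutations(range(1, n)):
        order = (0,) + rest
        length = tour_length(cities, order)
        if length < best_len:
            best, best_len = order, length
    return best, best_len

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]  # four cities on a unit square
order, length = brute_force_tsp(cities)
print(order, length)  # (0, 1, 2, 3) 4.0
```

For the unit square the optimal tour is its perimeter, length 4.0; any tour that crosses the diagonal is strictly longer.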
The well-known Lotka-Volterra model, a nonlinear differential system describing linear growth and quadratic interactions between variables, was first proposed to describe the predator-prey relationship in an ecosystem; it soon became the basis of many important models in mathematical biology and population dynamics, and has also found successful and interesting applications in physics, chemistry, economics and other fields. The Lotka-Volterra model of recurrent neural networks [1], [2], derived from the conventional membrane dynamics of competing neurons, provides a mathematical basis for understanding neural selection mechanisms and has found successful applications in winner-take-all problems [11], [12], [13]. The class of Lotka-Volterra neural networks discussed in this paper has no self-excitatory connections. We address several important properties, in particular the networks' convergence to a set of specific states corresponding to permutation matrices. This property realizes winner-take-all (WTA) behavior within each row and column, which can be applied successfully to TSPs. Simple conditions are derived to guarantee the stability of this class of LVNN, together with a set of analytical conditions guaranteeing that any stable equilibrium point of the network characterizes a valid tour of the TSP. A series of experiments is carried out to verify the theoretical analysis; the simulation results show that the proposed network exhibits good performance in terms of solution quality.

The rest of this paper is organized as follows. Section II presents the architecture and dynamics of the proposed network. Section III discusses some important properties of the network and the principles for choosing its parameter values. Simulation results are given in Section IV. Finally, conclusions are drawn in Section V.

II. SOME PRELIMINARIES AND TSP MAPPING

The TSP is an optimization task that arises in many practical situations.
Let n be the number of cities and d_ij the distance between cities i and j, i, j ∈ {1, ..., n}. A valid solution of the TSP can then be represented by an n × n permutation matrix, in which each column and each row is associated respectively with a particular city and with an order in the tour. Let H = {X ∈ [0, 1]^(n×n)} and H_C = {X ∈ {0, 1}^(n×n)} be the unit hypercube of R^(n×n) and its corner set, respectively. For a fixed X ∈ H, define the row and column sums

S_i = Σ_{α=1..n} x_iα,    S_α = Σ_{i=1..n} x_iα.

Then the set of valid tours of the TSP is

H_T = {X ∈ H_C : S_i = 1, S_α = 1, i, α = 1, ..., n},

and the invalid set H \ H_T is covered by the two sets

H_S = {X ∈ H : ∃ α ≠ β and i, s.t. x_iα > 0 and x_iβ > 0}

and

H_R = {X ∈ H : ∃ i ≠ j and α, s.t. x_iα > 0 and x_jα > 0}.

Clearly H_T represents the set of valid tours and H \ H_T the set of invalid tours. Meanwhile, H_S ∪ H_R = H \ H_T, and H_S ∩ H_R is not always empty.

978-1-4244-1674-5/08/$25.00 (c) 2008 IEEE        CIS 2008
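As a concrete companion to these definitions, here is our own small sketch (not code from the paper) that tests membership of a state matrix in H_T, H_S and H_R:

```python
import numpy as np

def is_valid_tour(X, tol=1e-6):
    """X in H_T: a 0/1 matrix whose every row sum S_i and every column
    sum S_alpha equals 1, i.e. a permutation matrix."""
    X = np.asarray(X, dtype=float)
    binary = np.all((X < tol) | (np.abs(X - 1.0) < tol))
    return bool(binary
                and np.allclose(X.sum(axis=1), 1.0, atol=tol)   # S_i = 1
                and np.allclose(X.sum(axis=0), 1.0, atol=tol))  # S_alpha = 1

def in_H_S(X, tol=1e-6):
    """Some row holds two positive entries (x_i,alpha > 0 and x_i,beta > 0)."""
    return bool(np.any((np.asarray(X) > tol).sum(axis=1) > 1))

def in_H_R(X, tol=1e-6):
    """Some column holds two positive entries (x_i,alpha > 0 and x_j,alpha > 0)."""
    return bool(np.any((np.asarray(X) > tol).sum(axis=0) > 1))

P = np.eye(3)[[1, 2, 0]]        # a 3x3 permutation matrix: a valid tour
print(is_valid_tour(P))          # True
print(in_H_S(P), in_H_R(P))      # False False
```

A matrix such as the all-ones matrix lies in both H_S and H_R, illustrating that the two invalid sets overlap.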

In this paper, the model of Lotka-Volterra neural networks without self-excitatory connections is described by

ẋ_iα(t) = x_iα(t) [ A(1 − Σ_{β=1..n} x_iβ(t)) + B(1 − Σ_{j=1..n} x_jα(t)) − Σ_{j=1..n} w_ij x_jα(t) − Σ_{β=1..n} w_αβ x_iβ(t) ],    (1)

where each x_iα is the state of neuron v_iα, w_ij is the connection weight between neuron v_iα and neuron v_jα, w_αβ is the connection weight between neuron v_iα and neuron v_iβ, and w_ij = w_αβ if i = α and j = β. Note that w_ij = w_ji. The first and second terms enforce the row and column constraints, while the third and fourth terms encode the total cost to be minimized.

Fig. 1. The connections of the network.

The network is therefore associated with an energy function in which the lowest-energy state corresponds to the shortest tour. This involves two aspects: first, the energy function must strongly favor stable states in the form of a permutation matrix; second, all solutions to the problem must correspond to valid tours. Using the method in [2], we obtain the energy function

E = (1/2) Σ_{α=1..n} Σ_{i=1..n} Σ_{j=1..n} w_ij x_iα x_jα + (1/2) Σ_{i=1..n} Σ_{α=1..n} Σ_{β=1..n} w_αβ x_iα x_iβ + (A/2) Σ_{i=1..n} (1 − Σ_{α=1..n} x_iα)² + (B/2) Σ_{α=1..n} (1 − Σ_{i=1..n} x_iα)².

III. PERFORMANCE ANALYSIS FOR THE NETWORK

Assume the initial condition x_iα(0) = φ_iα > 0, i, α = 1, ..., n. From [2] we easily see that each solution x(t) of (1) with such an initial condition satisfies x_iα(t) > 0, i, α = 1, ..., n, for all t ≥ 0.

Definition 1: The network (1) is called completely stable if each trajectory of (1) converges to an equilibrium point.

Lemma 1: Denote ω = max{w_ij}. If ρ = A + B − nω > 0, then the network (1) is bounded and completely stable.

Proof. See [2].

A. Non-convergence of invalid solutions

Theorem 1: Suppose that w_ii = 0 for i = 1, ..., n and w_ij > max(A, B) for i ≠ j. Then any equilibrium point that does not correspond to a valid tour of the TSP, i.e., that is not an n × n permutation matrix, is unstable.

Proof. The proof proceeds in three cases.

Case 1: x = 0. The linearization of (1) at the origin is

ẋ_iα = (A + B) x_iα,  i, α = 1, ..., n,

and the eigenvalues of its matrix are all positive. Then

lim_{t→∞} x_iα(t) = lim_{t→∞} x_iα(0) e^{(A+B)t} = ∞,

so x = 0 is unstable.

Case 2: x ∈ H_S. Suppose x is an equilibrium point. We prove that if some row has more than one element larger than zero, then x is unstable. Suppose there exist constants α and β, α ≠ β, such that x_iα > 0 and x_iβ > 0. Then it must hold that

A(1 − Σ_γ x_iγ) + B(1 − Σ_j x_jα) − Σ_j w_ij x_jα − Σ_γ w_αγ x_iγ = 0,
A(1 − Σ_γ x_iγ) + B(1 − Σ_j x_jβ) − Σ_j w_ij x_jβ − Σ_γ w_βγ x_iγ = 0.
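Such Lotka-Volterra dynamics are easy to explore numerically. The following is our own forward-Euler sketch, not code from the paper: the right-hand side follows the model as reconstructed here, and the weight range, step size, horizon, and random seed are all arbitrary assumptions.

```python
import numpy as np

def lv_step(X, W, A=1.0, B=1.0, dt=2e-3):
    """One forward-Euler step of
    dx_ia/dt = x_ia [ A(1 - sum_b x_ib) + B(1 - sum_j x_ja)
                      - (W X)_ia - (X W)_ia ],
    with symmetric W and zero diagonal (no self-excitation)."""
    row_def = 1.0 - X.sum(axis=1, keepdims=True)   # 1 - sum_beta  x_i,beta
    col_def = 1.0 - X.sum(axis=0, keepdims=True)   # 1 - sum_j     x_j,alpha
    F = A * row_def + B * col_def - W @ X - X @ W
    return X + dt * X * F

rng = np.random.default_rng(0)
n = 4
W = rng.uniform(2.1, 3.0, size=(n, n))    # w_ij > max(A, B), as in Theorem 1
W = (W + W.T) / 2.0                        # symmetric lateral inhibition
np.fill_diagonal(W, 0.0)                   # w_ii = 0: no self-excitation
X = rng.uniform(0.05, 0.95, size=(n, n))   # positive initial state in (0, 1)
for _ in range(60000):                     # integrate up to t = 120
    X = lv_step(X, W)
print(np.round(X, 3))                      # converged state matrix
```

Under Theorem 1's condition the invalid patterns are unstable, so a run like this should typically settle toward a permutation-matrix-like state; nothing here reproduces the paper's exact experiments.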

It is sufficient to prove that the two-dimensional subsystem

ẋ_iα = x_iα [ A(1 − x_iα − x_iβ − Σ_{γ≠α,β} x_iγ) + B(1 − x_iα − Σ_{j≠i} x_jα) − w_ii x_iα − Σ_{j≠i} w_ij x_jα − w_αα x_iα − w_αβ x_iβ − Σ_{γ≠α,β} w_αγ x_iγ ],
ẋ_iβ = x_iβ [ A(1 − x_iα − x_iβ − Σ_{γ≠α,β} x_iγ) + B(1 − x_iβ − Σ_{j≠i} x_jβ) − w_ii x_iβ − Σ_{j≠i} w_ij x_jβ − w_ββ x_iβ − w_βα x_iα − Σ_{γ≠α,β} w_βγ x_iγ ]

is unstable at (x_iα, x_iβ). Consider the Jacobian matrix of this subsystem at (x_iα, x_iβ),

J = [ J_11  J_12 ; J_21  J_22 ],

where

J_11 = −(A + B + w_ii + w_αα) x_iα,   J_12 = −(A + w_αβ) x_iα,
J_21 = −(A + w_βα) x_iβ,   J_22 = −(A + B + w_ii + w_ββ) x_iβ.

Let λ_1, λ_2 be the eigenvalues of J. Since w_ii = w_αα = w_ββ = 0 and w_αβ = w_βα, it is clear that

λ_1 λ_2 = (A + B + w_ii + w_αα)(A + B + w_ii + w_ββ) x_iα x_iβ − (A + w_αβ)(A + w_βα) x_iα x_iβ = x_iα x_iβ [ (A + B)² − (A + w_αβ)² ] < 0,

because w_αβ > B. It follows that at least one of λ_1, λ_2 is positive, so the subsystem above is unstable at (x_iα, x_iβ).

Case 3: x ∈ H_R. We now discuss the case in which some column has two elements larger than zero. Suppose there exist constants i and j, i ≠ j, such that x_iα > 0 and x_jα > 0. Then it must hold that

A(1 − Σ_γ x_iγ) + B(1 − Σ_k x_kα) − Σ_k w_ik x_kα − Σ_γ w_αγ x_iγ = 0,
A(1 − Σ_γ x_jγ) + B(1 − Σ_k x_kα) − Σ_k w_jk x_kα − Σ_γ w_αγ x_jγ = 0.

To complete the proof, it is sufficient to prove that the subsystem

ẋ_iα = x_iα [ A(1 − x_iα − Σ_{γ≠α} x_iγ) + B(1 − x_iα − x_jα − Σ_{k≠i,j} x_kα) − w_ii x_iα − w_ij x_jα − Σ_{k≠i,j} w_ik x_kα − w_αα x_iα − Σ_{γ≠α} w_αγ x_iγ ],
ẋ_jα = x_jα [ A(1 − x_jα − Σ_{γ≠α} x_jγ) + B(1 − x_iα − x_jα − Σ_{k≠i,j} x_kα) − w_ji x_iα − w_jj x_jα − Σ_{k≠i,j} w_jk x_kα − w_αα x_jα − Σ_{γ≠α} w_αγ x_jγ ]

is unstable at (x_iα, x_jα). Consider the Jacobian matrix of this subsystem at (x_iα, x_jα),

J = [ J_11  J_12 ; J_21  J_22 ],

where

J_11 = −(A + B + w_ii + w_αα) x_iα,   J_12 = −(B + w_ij) x_iα,
J_21 = −(B + w_ji) x_jα,   J_22 = −(A + B + w_jj + w_αα) x_jα.

Let λ_1, λ_2 be the eigenvalues of J. Since w_ii = w_jj = w_αα = 0 and w_ij = w_ji, it is clear that

λ_1 λ_2 = x_iα x_jα [ (A + B)² − (B + w_ij)² ] < 0,

because w_ij > A. It follows that at least one of λ_1, λ_2 is positive, so the subsystem above is unstable at (x_iα, x_jα).

B. Convergence of valid solutions

Theorem 2: A stable equilibrium point corresponds to a valid tour. That is, if x is a stable equilibrium point, then x ∈ H_T.
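The sign argument behind these 2×2 Jacobians can be spot-checked numerically. This is our own illustration with arbitrary positive values (not from the paper): once the lateral weight exceeds max(A, B), the determinant x_iα x_iβ [(A + B)² − (A + w_αβ)²] turns negative, so the eigenvalues have opposite signs and one unstable direction exists.

```python
import numpy as np

# Spot-check of the H_S Jacobian sign condition; A, B, w_ab, x_ia, x_ib
# are arbitrary positive picks satisfying w_ab > max(A, B).
A, B = 1.0, 1.0
w_ab = 2.5
x_ia, x_ib = 0.4, 0.3

J = np.array([[-(A + B) * x_ia, -(A + w_ab) * x_ia],
              [-(A + w_ab) * x_ib, -(A + B) * x_ib]])  # w_ii = w_aa = w_bb = 0

det = np.linalg.det(J)
closed_form = x_ia * x_ib * ((A + B) ** 2 - (A + w_ab) ** 2)
eigs = np.linalg.eigvals(J)
print(det, closed_form, eigs)
assert det < 0 and eigs.real.max() > 0   # one positive eigenvalue: unstable
```

With these numbers det = 0.12 · (4 − 12.25) = −0.99 < 0, and a 2×2 real matrix with negative determinant always has one positive and one negative real eigenvalue.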

Proof. Suppose the statement is not true, namely that there exists a stable equilibrium point corresponding to an invalid tour. Then at least two elements within some column or row must be larger than 0. By Theorem 1, such an equilibrium point is not stable, which contradicts the assumption. Hence a stable equilibrium point corresponds to a valid tour of the TSP.

IV. SIMULATION RESULTS

The convergence conditions of network (1) have been discussed above. To verify the theoretical results and the effectiveness of the network applied to TSPs, the following experiments were carried out. In the first experiment the initial states were set to x_ij ∈ [0, 1], i, j = 1, ..., 8, with n = 8, A = B, and the connection (inhibition) matrix W = (w_ij), i, j = 1, ..., 8, taken as the symmetric zero-diagonal matrix

W =
  0      .336   .34    .36    .5     .576   .98    .4564
  .336   0      .7     .649   .847   .883   .585   .648
  .34    .7     0      .5349  .799   .87    .594   .698
  .36    .649   .5349  0      .3397  .658   .57    .7375
  .5     .847   .799   .3397  0      .4579  .459   .6686
  .576   .883   .87    .658   .4579  0      .74    .937
  .98    .585   .594   .57    .459   .74    0      .77
  .4564  .648   .698   .7375  .6686  .937   .77    0

Fig. 2. Trajectories of the network in a random column.

Fig. 3. Trajectories of the network in a random row.

Fig. 2 and Fig. 3 show the trajectories of the variables in a random column and a random row, respectively, and Fig. 4 shows the trajectories of all the variables. The network clearly follows the WTA rule within each row and column, while in the global view it exhibits a winner-share-all phenomenon; the properties illustrated in the figures are not the same as those in [11]. Figs. 2-4 correspond to a permutation matrix whose columns and rows are associated respectively with a particular city and an order in the tour of the TSP.

TABLE I. 10-CITY PROBLEM

i | 1 2 3 4 5 6 7 8 9 10
X | 3 4 5 3 4 5
Y | 3 4 3 4 3

The second experiment is on a 10-city problem presented in [8]; its city coordinates are shown in Table I, and its minimal tour length is 5.337. The initial states were randomly generated with x_iα ∈ (0, 1).

TABLE II. PERFORMANCE OF THE LVNN WITH DIFFERENT PARAMETER SETTINGS

Setting    | Good% | Min Length | Ave Length
A = B = .  | 3%    | 5.337      | 3.977
A = B = .5 | 7%    | 5.337      | 3.63
The item "Good" indicates the percentage of tours whose length is within 5% of the optimum distance. Simulation results are shown in Table II, where the two rows give the performance of the proposed network under the two parameter settings; the network obtains the shortest tour under both settings shown in the table. Fig. 5(a) and Fig. 5(b) show two optimal solutions obtained by the LVNN with the respective parameters.
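To score solutions the way Table II does, a converged state matrix has to be decoded into a tour. The sketch below is our own illustration; the axis convention (rows as tour positions) and the sample distance matrix are assumptions, not the paper's data.

```python
import numpy as np

def decode_tour(X):
    """Read the city visited at each tour position as the argmax of the
    corresponding row of a (near-)permutation state matrix X."""
    return [int(c) for c in np.argmax(X, axis=1)]

def tour_length(D, tour):
    """Length of the closed tour under distance matrix D."""
    n = len(tour)
    return sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))

# Four cities with hand-picked symmetric distances (illustrative only).
D = np.array([[0.0, 1.0, 2.0, 1.0],
              [1.0, 0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0, 1.0],
              [1.0, 2.0, 1.0, 0.0]])
X = np.eye(4)                      # identity permutation: visit 0, 1, 2, 3
tour = decode_tour(X)
print(tour, tour_length(D, tour))  # [0, 1, 2, 3] 4.0
```

Running many random initializations through such a decoder, then tallying how many tours fall within 5% of the optimum, reproduces the kind of "Good%" statistic reported in Table II.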

Fig. 4. Trajectories of all the variables.

V. CONCLUSIONS

In this paper, a class of LVNN is used to solve TSPs. The dynamics of the networks are analyzed in terms of the stability of specific equilibrium points, including the conditions derived for convergence. The WTA behavior within each row and column enables the networks to solve TSPs. The theoretical results have been illustrated with a series of experiments. The analytical conditions depend on all the lateral inhibitions of the network, including the network parameters. For simplicity, the simulations in this paper take the parameter A equal to B; allowing A and B to differ could further improve the model, and more research on this issue is required.

Fig. 5. Two optimal tours obtained by the proposed network. The parameter settings are (a) A = .5, B = .5, w_ij = d_ij and (b) A = ., B = ., w_ij = d_ij.

ACKNOWLEDGMENT

This work was supported by the Chinese 863 High-Tech Program under Grant 7Z3.

REFERENCES

[1] Z. Yi and K. K. Tan, "Dynamic stability conditions for Lotka-Volterra recurrent neural networks with delays," Phys. Rev. E, vol. 66, 011910, 2002.
[2] Z. Yi and K. K. Tan, Convergence Analysis of Recurrent Neural Networks. Boston: Kluwer Academic Publishers, 2004.
[3] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.
[4] S. Matsuda, "Optimal Hopfield network for combinatorial optimization with linear cost function," IEEE Trans. Neural Netw., vol. 9, no. 6, pp. 45-97, 1998.
[5] P. M. Talavan and J. Yanez, "Parameter setting of the Hopfield network applied to TSP," Neural Netw., vol. 15, pp. 363-373, 2002.
[6] Y. Tachibana, G. H. Lee, H. Ichihashi and T. Miyoshi, "A simple steepest descent method for minimizing Hopfield energy to obtain optimal solution of the TSP with reasonable certainty," in Proc. IEEE Int. Conf. Neural Networks, 1995, pp. 1871-1875.
[7] M. Peng, N. K. Gupta and A. Armitage, "An investigation into the improvement of local minima of the Hopfield network," Neural Netw., vol. 9, no. 7, pp. 1241-1253, 1996.
[8] K. C. Tan, H. Tang and S. S. Ge, "On parameter settings of Hopfield networks applied to traveling salesman problems," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 52, no. 5, pp. 994-1002, 2005.
[9] H. Tang, K. C. Tan and Zhang Yi, "A columnar competitive model for solving combinatorial optimization problems," IEEE Trans. Neural Netw., vol. 15, no. 6, pp. 1568-1573, 2004.
[10] H. Qu, Z. Yi and Huajin Tang, "Improving local minima of columnar competitive model for TSPs," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 53, no. 6, pp. 1353-1362, 2006.
[11] T. Fukai and S. Tanaka, "A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winner-share-all," Neural Computation, vol. 9, pp. 77-97, 1997.
[12] T. Asai, T. Fukai and S. Tanaka, "A subthreshold MOS circuit for the Lotka-Volterra neural network producing the winner-share-all solution," Neural Netw., vol. 12, pp. 211-216, 1999.
[13] T. Asai, M. Ohtani and H. Yonezu, "Analog integrated circuits for the Lotka-Volterra competitive neural networks," IEEE Trans. Neural Netw., vol. 10, no. 5, pp. 1222-1231, 1999.
[14] X. Zhang and L. Chen, "The global dynamic behavior of the competition model of three species," J. Math. Anal. Appl., vol. 245, pp. 124-141, 2000.