Design and Analysis of Maximum Hopfield Networks


IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 2, MARCH 2001

Design and Analysis of Maximum Hopfield Networks

Gloria Galán-Marín and José Muñoz-Pérez

Abstract—Since McCulloch and Pitts presented a simplified neuron model in 1943, several neuron models have been proposed. Among them, the binary maximum neuron model was introduced by Takefuji et al. and successfully applied to some combinatorial optimization problems. Takefuji et al. also presented a proof of the local-minimum convergence of the maximum neural network. In this paper we discuss this convergence analysis and show that this model does not guarantee the descent of a large class of energy functions. We also propose a new maximum neuron model, the optimal competitive Hopfield model (OCHOM), that always guarantees and maximizes the decrease of any Lyapunov energy function. Funabiki et al. applied the maximum neural network to the n-queens problem and showed that this model presented the best overall performance among the existing neural networks for this problem. Lee et al. applied the maximum neural network to the bipartite subgraph problem, showing that the solution quality was superior to that of the best existing algorithm. However, simulation results on the n-queens problem and on the bipartite subgraph problem show that the OCHOM is much superior to the maximum neural network in terms of solution quality and computation time.

Index Terms—Bipartite subgraph problem, combinatorial optimization, competitive Hopfield model, maximum neural network, n-queens problem, winner-take-all.

I. INTRODUCTION

THE FIRST neural network for combinatorial optimization problems was introduced by Hopfield and Tank in 1985 [1], [2]. Since then, neural networks have proved effective in dealing with many combinatorial optimization problems [4]-[15]. It has been shown that neural techniques can compete effectively with more traditional heuristics on real-world combinatorial optimization problems, such as car sequencing problems and postal delivery problems [21]. The goal of neural-network approaches to combinatorial optimization is to formulate the desired objective function being optimized such that it can be viewed as a natural energy minimization problem. Although Hopfield networks implement a gradient descent method, they should not be viewed as naive gradient descent machines, but as an ensemble of interconnected processing units with simple computational requirements that can implement complex computation (inspired by many natural phenomena). Thus, the true advantage of using Hopfield-type neural networks to solve difficult optimization problems relates to speed considerations. Due to their inherently parallel structure and simple computational requirements, neural-network techniques are especially suitable for direct hardware implementation, using analog or digital integrated circuits [22], or parallel simulations [23]. Moreover, the Hopfield neural networks have very natural implementations in optics [24].

Manuscript received August 23, 1999; revised October 25. This work was supported in part by the Comisión Interministerial de Ciencia y Tecnología (CICYT) under Grant TAP. G. Galán-Marín is with the Departamento de Matemática Aplicada, E.T.S.I. Telecomunicación, Universidad de Málaga, Málaga, Spain (e-mail: ggalan@ctima.uma.es). J. Muñoz-Pérez is with the Departamento de Lenguajes y Ciencias de la Computación, E.T.S.I. Informática, Universidad de Málaga, Málaga, Spain (e-mail: munozp@lcc.uma.es).
Thus, they are considered to hold much potential for rapid execution speed through their hardware implementation. The Hopfield network has demonstrated that a distributed system of simple processing elements can collectively solve optimization problems. However, the original Hopfield network generates poor-quality and/or invalid solutions, and so many different approaches have been proposed. These techniques have demonstrated significant improvement in performance over the generic Hopfield network. It is crucial to incorporate as many problem-specific constraints as possible into the structure of the neural network to improve scalability by limiting the search space. In this way, the maximum Hopfield network was successfully introduced by Takefuji et al. [3] and Lee et al. [9] to handle a class of NP-complete optimization problems which used to be hard to solve by a neural network. This model has been shown to provide powerful approaches for combinatorial optimization problems [9]-[12] and for polygonal approximation [19].

The operation of the maximum network is based on the notion of group update. The network is composed of disjoint groups, where each group consists of a number of binary neurons. One and only one neuron in each group has one as its output. The goal of the neural network is to minimize the energy function, which represents the objective function and the constraints of the problem. The dynamics of the network must be determined so that the energy function is decreased as the groups of neurons are updated. In the maximum neural network, a competitive winner-take-all rule is imposed for the updating of the neurons, so that the neuron with the maximum input per group always has nonzero output. In this way, the input-output function of the $i$th neuron in the $k$th group $G_k$ is given by

$x_i(t+1) = 1$ if $u_i(t) = \max_{j \in G_k} u_j(t)$; $x_i(t+1) = 0$ otherwise

where $x_i$ and $u_i$ are the output and the input of the $i$th neuron, respectively. Takefuji et al. [3] considered that the change of the input state of the $i$th neuron was given by the motion equation

$\dfrac{du_i}{dt} = -\dfrac{\partial E}{\partial x_i}.$   (1)

In this paper we will show that the updating rule (1) applied by Takefuji et al. to the binary maximum neuron model can guarantee that the energy is monotonically decreased only for some special energy functions. However, this result is not valid for a large class of energy functions.
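To make this winner-take-all rule concrete, here is a minimal Python/NumPy sketch (our own illustration; the function and variable names are not from the paper) that applies the input-output function above to an input vector partitioned into groups:

```python
import numpy as np

def winner_take_all(u, groups):
    """Maximum neuron model: in each group, only the (first) neuron
    with the maximum input u_i gets output 1; all others get 0."""
    x = np.zeros_like(u)
    for g in groups:                  # g: array of neuron indices of one group
        x[g[np.argmax(u[g])]] = 1.0   # np.argmax returns the first maximizer
    return x

# Example: two groups of three neurons each.
u = np.array([0.2, 1.5, 0.7, -0.1, 0.4, 0.4])
groups = [np.arange(0, 3), np.arange(3, 6)]
print(winner_take_all(u, groups))     # [0. 1. 0. 0. 1. 0.]
```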

We also propose in this paper a novel maximum neuron model, namely the optimal competitive Hopfield model (OCHOM), that always guarantees and maximizes the descent of any Lyapunov energy function as the groups of neurons are updated. In our model we consider that the input of the neuron is computed by the original Hopfield updating rule instead of by (1). As Wang [17] pointed out, the updating rule for discrete neurons given by (1), which provides the change of the input instead of the input itself, is computationally inefficient, since a neuron may not be able to update its state until it has accumulated enough input during several evaluations of (1). For this reason, even in cases in which both models guarantee the descent of the energy function, the OCHOM reaches the energy minimum more rapidly and escapes from local minima more easily.

The effectiveness of the maximum neuron model was shown by Lee et al. through the bipartite subgraph problem [9] and the module orientation problem [10], and by Chung et al. through polygonal approximation [19]. When the maximum neuron model is applied to these problems, only the optimization of a special class of objective functions is required. We will show that, for this reason, the energy descent is also guaranteed with the maximum neuron model in these examples.

Funabiki et al. [12] were the first to apply the maximum neuron model to constraint satisfaction problems. They selected the n-queens problem because several neural networks have been proposed for it [11]-[15]. Thus, there is no difficulty in testing in this problem whether the state of the system is in a global minimum or not, because a global minimum always has $E = 0$. Funabiki et al. [12] demonstrated that their network showed by far the best performance among the existing neural networks. However, in order to avoid local-minimum convergence they had to apply heuristic methods such as the input saturation heuristic and the hill-climbing term. One major problem with these heuristics is the lack of rigorous guidelines for selecting appropriate values of the parameters used in them.

Recently, we [15] presented a new input-output function for the binary sequential Hopfield model with an application to the n-queens problem. Through simulation results we showed that this neural network converges to global optimal solutions for this problem without the help of heuristic methods. However, since the operation of this sequential model is based on the notion of single update, the required number of iteration steps for convergence increases in proportion to the chessboard size. Then, although we have found solutions in up to the 250-queens problem, for very large-scale networks a global minimum is difficult to achieve because the computation time is considerably increased.

We present in this paper an application of a new maximum neuron model, the OCHOM, to the n-queens problem. Since this model is based on the notion of group update, we have observed that the computation time is reduced by a factor of up to 100 for large-scale networks compared to the sequential model presented in [15]. Moreover, simulation results in up to the 2000-queens problem show that the OCHOM, without the help of heuristic methods, performs better than the best known neural network of Funabiki et al. [12] in terms of the solution quality and the computation time. We also present in this paper an application of the OCHOM to a well-known NP-complete problem, the bipartite subgraph problem.
The significant advantage of applying the maximum network or the OCHOM to this problem is that no parameter affects the global-minimum search. Lee et al. [9] presented for this problem a neural-network algorithm based on the maximum neural network. They showed that it performed better than the best known algorithm in terms of the solution quality. However, massive simulation runs show that the OCHOM on this problem is computationally superior to the maximum neural network.

II. DESIGN OF A NEW MAXIMUM NEURAL NETWORK: THE OCHOM

Consider a neural network with $N$ neurons, where each neuron is connected to all the other neurons. The state of neuron $i$ is denoted by $x_i$ and its bias by $\theta_i$, for $i = 1, \ldots, N$; $w_{ij}$ is a real number that represents the interconnection strength between neurons $i$ and $j$, where symmetric weights are considered, $w_{ij} = w_{ji}$ for $i, j = 1, \ldots, N$; observe that we allow arbitrary values $w_{ii}$ of the self-connections of each neuron. The Lyapunov function of the neural network is given by

$E(t) = -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} w_{ij}\,x_i(t)\,x_j(t) - \sum_{i=1}^{N}\theta_i\,x_i(t).$   (2)

For any change $\Delta x_i(t) = x_i(t+1) - x_i(t)$ of the states of any neurons of the network, where $t$ denotes discrete time, it is easy to show that the resulting energy difference is

$\Delta E(t) = -\sum_{i=1}^{N}\Delta x_i(t)\Big[\sum_{j=1}^{N} w_{ij}\,x_j(t) + \theta_i\Big] - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} w_{ij}\,\Delta x_i(t)\,\Delta x_j(t).$   (3)

When the inputs of the neurons are computed by

$u_i(t) = \sum_{j=1}^{N} w_{ij}\,x_j(t) + \theta_i$   (4)

it follows that

$\Delta E(t) = -\sum_{i=1}^{N}\Delta x_i(t)\,u_i(t) - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} w_{ij}\,\Delta x_i(t)\,\Delta x_j(t).$

Let us consider now that the network is partitioned into $m$ disjoint groups $G_1, \ldots, G_m$, where each group $G_k$ is composed of $n_k$ neurons, such that $\sum_{k=1}^{m} n_k = N$.
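As a small illustration of (2)-(4) (our own sketch; identifiers are assumptions, not the authors' code), the following NumPy fragment evaluates the Lyapunov energy and the Hopfield inputs, and numerically checks the energy difference (3):

```python
import numpy as np

def lyapunov_energy(W, theta, x):
    """Energy (2): E = -1/2 * x^T W x - theta^T x, with W symmetric."""
    return -0.5 * x @ W @ x - theta @ x

def hopfield_inputs(W, theta, x):
    """Hopfield updating rule (4): u_i = sum_j w_ij x_j + theta_i."""
    return W @ x + theta

# Sanity check of the energy difference (3) for an arbitrary state change.
rng = np.random.default_rng(0)
N = 6
W = rng.normal(size=(N, N)); W = (W + W.T) / 2    # symmetric weights
theta = rng.normal(size=N)
x_old = rng.integers(0, 2, N).astype(float)
x_new = rng.integers(0, 2, N).astype(float)
dx = x_new - x_old
dE = -dx @ hopfield_inputs(W, theta, x_old) - 0.5 * dx @ W @ dx
assert np.isclose(dE, lyapunov_energy(W, theta, x_new) - lyapunov_energy(W, theta, x_old))
```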

We introduce now the notion of group update; that is, instead of selecting a single neuron for update, we can select a group containing a number of neurons. Then, the difference in the energy, $\Delta E_k(t)$, that would result if only the states of the neurons in the group $G_k$ were altered is

$\Delta E_k(t) = -\sum_{i\in G_k}\Delta x_i(t)\,u_i(t) - \frac{1}{2}\sum_{i\in G_k}\sum_{j\in G_k} w_{ij}\,\Delta x_i(t)\,\Delta x_j(t).$   (5)

We propose now a generalized definition of a competitive Hopfield model (CHOM).

Definition 1: Let $H$ be a binary (1/0) neural network characterized by a Lyapunov energy function (2) where the inputs of the neurons are computed by (4). If the network is partitioned into disjoint groups, we shall say that $H$ is a CHOM if one and only one neuron per group has one as its output at every time $t$.

Theorem 1 (Energy Reduction Condition for the CHOM): Let $H$ be a CHOM in which only one group $G_k$ is selected for updating at time $t$. Let $\alpha$ be the neuron in group $G_k$ with output one at time $t$. Then the energy is guaranteed to decrease if and only if the candidate neuron $\beta$ in group $G_k$ that will have output one at time $t+1$ satisfies

$u_\beta(t) - u_\alpha(t) > \gamma_{\alpha\beta}$   (6)

where $\gamma_{\alpha\beta} = w_{\alpha\beta} - \frac{1}{2}(w_{\alpha\alpha} + w_{\beta\beta})$.

Proof: Since we are assuming that at time $t$ the neuron $\alpha$ is the only one that is on in group $G_k$, we have $x_\alpha(t) = 1$ and $x_i(t) = 0$ for every $i \in G_k$, $i \neq \alpha$. If neuron $\beta$ is the candidate neuron in group $G_k$ that is going to be on at time $t+1$, then $x_\beta(t+1) = 1$ and $x_i(t+1) = 0$ for every $i \in G_k$, $i \neq \beta$. From these conditions we get that if $\beta \neq \alpha$ it follows that $\Delta x_\alpha(t) = -1$, $\Delta x_\beta(t) = 1$ and $\Delta x_i(t) = 0$ for every other $i \in G_k$. Observe that if $\beta = \alpha$ then $\Delta x_i(t) = 0$ for every $i \in G_k$, and no updating is made at time $t$. By substituting these values we have from (5) that

$\Delta E(t) = -\left[u_\beta(t) - u_\alpha(t) - \gamma_{\alpha\beta}\right].$   (7)

Hence, it follows from the above expression that the necessary and sufficient condition for $\Delta E(t) < 0$ is given by (6).

Next, we propose the dynamics of the CHOM in order to satisfy the energy reduction condition (6) stated in Theorem 1.

Corollary 1 (Optimal Dynamics of the CHOM): Let $H$ be a CHOM in which only one group $G_k$ is selected for updating at every time $t$. Let $\alpha$ be the neuron in group $G_k$ with output 1 at time $t$. If the dynamics of the CHOM are given by

$x_i(t+1) = 1$ if $u_i(t) - \gamma_{\alpha i} = \max_{j\in G_k}\{u_j(t) - \gamma_{\alpha j}\}$; $x_i(t+1) = 0$ otherwise   (8)

where $\gamma_{\alpha j} = w_{\alpha j} - \frac{1}{2}(w_{\alpha\alpha} + w_{jj})$, then the energy decrease is maximized at every time $t$.

Proof: From (7), the energy difference if only one group is updated at time $t$ is $\Delta E(t) = -[u_\beta(t) - \gamma_{\alpha\beta} - u_\alpha(t)]$. Note that $\gamma_{\alpha\alpha} = 0$, so the score of the active neuron reduces to $u_\alpha(t)$. Hence, we have that if the neuron with the maximum value of $u_j(t) - \gamma_{\alpha j}$ per group is always selected as the candidate neuron, then the energy descent is guaranteed, since the condition $\Delta E(t) \le 0$ is satisfied. Thus, the absolute value of the energy decrease is the maximum possible at every time $t$.

Definition 2: Let $H$ be a CHOM. We shall say that $H$ is an optimal competitive Hopfield model (OCHOM) if the dynamics of the network are defined by (8).

Corollary 2: Let $H$ be an OCHOM. If $\gamma_{\alpha\beta} = 0$, that is, $w_{\alpha\beta} = \frac{1}{2}(w_{\alpha\alpha} + w_{\beta\beta})$, for every two neurons $\alpha$ and $\beta$ included in the same group, then the dynamics of the network are reduced to

$x_i(t+1) = 1$ if $u_i(t) = \max_{j\in G_k} u_j(t)$; $x_i(t+1) = 0$ otherwise   (9)

that is, the maximum neuron model of Takefuji et al.

III. ANALYSIS OF THE EXISTING MAXIMUM NEURAL NETWORKS

Takefuji et al. [3] considered that the change of the input state of a neuron was given by the motion equation

$\dfrac{du_i}{dt} = -\dfrac{\partial E}{\partial x_i}.$   (10)

The proof given in [3], [9], [10] states that if the change of $u_i$ is given by (10) in the maximum neuron model (9), then the condition $dE/dt \le 0$ is satisfied. This proof uses the chain rule to compute the derivative as follows:

$\dfrac{dE}{dt} = \sum_{i=1}^{N}\dfrac{\partial E}{\partial x_i}\dfrac{dx_i}{dt} = -\sum_{i=1}^{N}\dfrac{du_i}{dt}\dfrac{dx_i}{dt}.$   (11)

However, Tateishi and Tamura [16] pointed out that, since the output of the neuron is discrete, it is not possible to compute the partial derivative $\partial E/\partial x_i$
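The following sketch (illustrative code of ours, not the authors' implementation) performs one OCHOM group update according to dynamics (8): it scores every neuron of the selected group by $u_i(t) - \gamma_{\alpha i}$ and moves the active output to the best-scoring neuron, so by Theorem 1 the energy (2) cannot increase:

```python
import numpy as np

def ochom_group_update(W, theta, x, group):
    """One step of the OCHOM dynamics (8) on one group.

    group is an index array containing exactly one active neuron.
    x is updated in place; the Lyapunov energy (2) cannot increase."""
    u = W @ x + theta                      # Hopfield updating rule (4)
    alpha = group[np.argmax(x[group])]     # the neuron with output 1
    # gamma_{alpha,i} = w_{alpha,i} - (w_{alpha,alpha} + w_{ii}) / 2,
    # so gamma_{alpha,alpha} = 0 and the score of alpha equals u[alpha].
    gamma = W[alpha, group] - 0.5 * (W[alpha, alpha] + W[group, group])
    score = u[group] - gamma
    beta = group[np.argmax(score)]         # candidate maximizing the descent
    if beta != alpha and score.max() > u[alpha]:   # strict condition (6)
        x[alpha], x[beta] = 0.0, 1.0
    return x                               # ties resolve to no update, dE = 0
```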

and consequently the derivative (11). Then, there is an error in the proof, and so this model cannot guarantee that the energy function is always monotonically decreased. In spite of this, Takefuji et al. [9]-[12] have obtained remarkable solutions for some practical optimization problems by applying this model. The reason is that, in practice, in order to numerically solve the differential equation (10) they use the first-order Euler method in the form

$u_i(t+1) = u_i(t) + \dfrac{du_i(t)}{dt}\,\Delta t.$

Then, for a general Lyapunov energy function (2), the resulting updating rule that they use for discrete neurons (with $\Delta t = 1$) is

$u_i(t+1) = u_i(t) + \sum_{j=1}^{N} w_{ij}\,x_j(t) + \theta_i$   (12)

so their networks are based on (12) instead of on the Hopfield updating rule (4), on which the OCHOM is based.

Next, we will show how this updating rule (12), with the maximum neuron model of Takefuji et al., can guarantee the descent of some special energy functions. Let us consider the energy change (3) of a general network characterized by a Lyapunov energy function. When the updating rule for discrete neurons (12) is applied, it follows from (3) that the energy difference of any neural network is expressed as

$\Delta E(t) = -\sum_{i=1}^{N}\Delta x_i(t)\,\Delta u_i(t) - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} w_{ij}\,\Delta x_i(t)\,\Delta x_j(t)$

where $\Delta u_i(t) = u_i(t+1) - u_i(t)$. Consider now that $H$ is a binary (1/0) neural network partitioned into groups in which only one group $G_k$ is selected for updating at time $t$. Suppose again that neuron $\alpha$ is the neuron with output one at time $t$ and that neuron $\beta$ is the one that is going to have output one at time $t+1$. Hence, it is easily shown that

$\Delta E(t) = -\left[\Delta u_\beta(t) - \Delta u_\alpha(t) - \gamma_{\alpha\beta}\right]$   (13)

where $\gamma_{\alpha\beta} = w_{\alpha\beta} - \frac{1}{2}(w_{\alpha\alpha} + w_{\beta\beta})$. In this way, the necessary and sufficient condition for $\Delta E(t) < 0$ by applying the updating rule (12) of Takefuji et al. is

$\Delta u_\beta(t) - \Delta u_\alpha(t) > \gamma_{\alpha\beta}.$

In the maximum neuron model, the input-output function for the neuron $i$ of the group $G_k$ is given by $x_i(t+1) = 1$ if $u_i(t) = \max_{j\in G_k} u_j(t)$, and $x_i(t+1) = 0$ otherwise. Then, observe that, since the neuron $\alpha$ is the one with output one at time $t$, it implies $u_\alpha(t) \ge u_\beta(t)$. Similarly, since the neuron $\beta$ is the one with output one at time $t+1$, it implies $u_\beta(t+1) \ge u_\alpha(t+1)$. Hence, if $\beta \neq \alpha$, the left-hand side of the condition is rewritten in the form

$\Delta u_\beta(t) - \Delta u_\alpha(t) = \left[u_\beta(t+1) - u_\alpha(t+1)\right] + \left[u_\alpha(t) - u_\beta(t)\right] \ge 0$

and it implies that if $\gamma_{\alpha\beta} \le 0$, that is, if $w_{\alpha\beta} \le \frac{1}{2}(w_{\alpha\alpha} + w_{\beta\beta})$ for every two neurons in the same group, then the general Lyapunov energy function monotonically decreases ($\Delta E(t) \le 0$) at every time $t$ by using the maximum neuron model (9) and the updating rule (12) of Takefuji et al.

Next, we present some examples in which the maximum neuron model has been successfully applied. We also explain why this model is effective in these problems when sequential updating of the groups is considered, that is, when only one group is selected for updating at every time $t$.
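For contrast with the OCHOM, which evaluates (4) directly, the next sketch (again our own illustration) performs one step of the maximum neuron model (9) driven by the accumulative updating rule (12); note that the inputs u persist between calls and accumulate the changes, which is the inefficiency pointed out by Wang [17]:

```python
import numpy as np

def takefuji_step(W, theta, u, x, groups):
    """Maximum neuron model (9) driven by updating rule (12):
    u(t+1) = u(t) + W x(t) + theta, then each group fires the
    first neuron attaining the maximum accumulated input."""
    u = u + (W @ x + theta)               # rule (12): u accumulates Delta u
    x_new = np.zeros_like(x)
    for g in groups:
        x_new[g[np.argmax(u[g])]] = 1.0   # np.argmax returns the first maximizer
    return u, x_new
```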
A. The Bipartite Subgraph Problem

Lee et al. [9] proposed a maximum neural-network model for the bipartite subgraph problem. The goal of this NP-complete problem is to find a bipartite subgraph with the maximum number of edges of a given graph. The energy function, based on a maximum neural network with $m$ groups of two neurons, is given by

$E = \frac{1}{2}\sum_{x=1}^{m}\sum_{y=1}^{m}\sum_{i=1}^{2} d_{xy}\,v_{xi}\,v_{yi}$

where $d = (d_{xy})$ is the adjacency matrix: $d_{xy} = 1$ if there is an edge between vertex $x$ and vertex $y$, and $d_{xy} = 0$ otherwise; $v_{xi} = 1$ if vertex $x$ belongs to cluster $i$, and zero otherwise. Observe that, since $d_{xx} = 0$, there are no $v_{xi}^{2}$ terms or $v_{xi}v_{xj}$ terms in the energy function. Then, there are no self-connections in the network and no interconnections between neurons that belong to the same group. Consequently, $\gamma_{\alpha\beta} = 0$ for every two neurons $\alpha$ and $\beta$ in the same group, and so the energy decreases on every step by applying the maximum neural network of Takefuji et al. to this problem.

B. The Module Orientation Problem

Lee et al. [10] proposed a maximum neural-network model for this problem that performed better than the best known algorithm. The goal of this NP-complete problem is to minimize the total wire length, where the summation is carried out over all pairs of pins which belong to the same net. The energy function based on the maximum neural network is given by

$E = \frac{1}{2}\sum_{x}\sum_{y\neq x}\sum_{p}\sum_{q} d_{xp,yq}\,v_{xp}\,v_{yq}$

where $d_{xp,yq}$ denotes the total length of all wires between the $x$th module in the $p$th orientation and the $y$th module in the $q$th orientation; $v_{xp} = 1$ if the $x$th module is in the $p$th orientation, and zero otherwise. Observe again that, since the summation excludes $x = y$, there are no self-connections in the network and no interconnections between neurons that belong to the same group. Consequently, $\gamma_{\alpha\beta} = 0$ for every two neurons $\alpha$ and $\beta$ in the same group, and so the energy decreases on every step by applying the maximum neural network of Takefuji et al. to this problem.

C. Polygonal Approximation

Chung et al. [19] proposed a CHNN for polygonal approximation. The input-output function for every column was given by the maximum neuron model of Takefuji et al., and the inputs

of the neurons were computed by Hopfield's updating rule. By comparing the energy function defined for this problem with the Hopfield energy function, the connection weights and the biases are derived; if we substitute them in the Hopfield's updating rule, we get the inputs of the neurons. Since there are no $v_{xi}v_{xj}$ terms and no $v_{xi}^{2}$ terms, $\gamma_{\alpha\beta} = 0$ for every two neurons belonging to the same group (column). Hence, the energy reduction condition (6) is satisfied.

IV. SIMULATION RESULTS

A. The n-Queens Problem

Many practical optimization problems, such as the three described above, can be represented and solved using a two-dimensional (2-D) network. For any state which represents a valid solution of the problem, a certain number of neurons should be on in each column and/or row of the network. Making use of the valid-solution constraint, an efficient partition of the network into groups can be developed. For example, for the n-queens problem a valid solution requires that one and only one queen must be located per row and per column, and no more than one queen must be located on any diagonal line. The chessboard is represented by a 2-D $n \times n$ network, where the binary output $x_{ij} = 1$ of the $ij$th neuron means a queen is assigned to the $i$th row and the $j$th column, and $x_{ij} = 0$ otherwise. Then, since there must be one and only one neuron on in each column and row of the network, the groups for the CHOM can be constructed such that every group is a row of the network, or such that every group is a column of the network, for example. In this way, the row constraint or the column constraint can be removed from the energy function, because one of them will always be satisfied in this model.

Following the maximum neural network of Funabiki et al. [12], we consider that every row of the network is a group of the model. The energy function is given by

$E = \frac{A}{2}\sum_{j=1}^{n}\Big(\sum_{i=1}^{n} x_{ij} - 1\Big)^{2} + \frac{B}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k\neq i}\big(x_{k,\,j+(k-i)} + x_{k,\,j-(k-i)}\big)\,x_{ij}$   (14)

where terms with column indexes outside $\{1, \ldots, n\}$ are omitted. By comparing the energy function (14) defined for the n-queens problem and the Hopfield energy function (2), the connection weights and the biases are derived. If we substitute them in the Hopfield's updating rule we get

$u_{ij}(t) = -A\Big(\sum_{k=1}^{n} x_{kj}(t) - 1\Big) - B\sum_{k\neq i}\big(x_{k,\,j+(k-i)}(t) + x_{k,\,j-(k-i)}(t)\big).$   (15)

Note that the bias for every neuron is $\theta_{ij} = A$, the neural self-connection is $w_{ij,ij} = -A$, and the interconnection between neurons that belong to the same group (row) is zero. Consequently, $\gamma_{\alpha\beta} = A$ for every two distinct neurons $\alpha$ and $\beta$ in the same group (row).

Let us assume that on step $t$ the neuron $\alpha$ is the only one that is on in group (row) $i$ and that neuron $\beta$ is the candidate neuron in that group to be on on the next step. Hence, the difference in the energy that would result if only the states of the neurons in the group were altered is

$\Delta E(t) = -\left[u_\beta(t) - u_\alpha(t) - A\right]$

where $\beta \neq \alpha$. The following procedure describes the proposed algorithm based on the OCHOM (a runnable sketch is given after this procedure).

1) Set the initial state of the competitive Hopfield model by randomly setting the output of one neuron in each group (row) to be one and all the other neurons in the group (row) to be zero.
2) Evaluate the initial value of the energy function (14).
3) Select a group (row) $i$.
4) Compute the inputs $u_{ij}(t)$ of the neurons in the group (row) by (15), for $j = 1, \ldots, n$.
5) Select the activated neuron $\alpha$ in the group (row) and select the neuron $\beta$ with the maximum input in the group (row).
6) If $u_\beta(t) - A > u_\alpha(t)$, then set $x_\alpha(t+1) = 0$ and $x_\beta(t+1) = 1$, and $E(t+1) = E(t) - [u_\beta(t) - u_\alpha(t) - A]$; else $E(t+1) = E(t)$, since no updating is made.
7) Repeat from Step 3) until $E = 0$.

On Step 3) we select a group (row) randomly or, more simply, we follow a fixed cyclic order $i = 1, 2, \ldots, n$. On Step 5), if there are different neurons in row $i$ with the maximum value of $u_{ij}$, the algorithm must randomly select one of them; however, for simplicity, it selects the first neuron in the group (row) with the maximum value of $u_{ij}$. The coefficients in the energy function are selected as in the neural network of Funabiki et al. [12].
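The following compact Python sketch of this procedure is our own reconstruction (the helper names, the explicit loops, and the choice A = B = 1 are assumptions based on the text; the code favors clarity over speed):

```python
import numpy as np

def nq_energy(x, A=1.0, B=1.0):
    """Energy (14): column constraint plus diagonal conflicts. E = 0 at a solution."""
    n = x.shape[0]
    col = 0.5 * A * ((x.sum(axis=0) - 1.0) ** 2).sum()
    diag = 0.0
    for i in range(n):
        for j in range(n):
            if x[i, j] == 1.0:
                for k in range(n):
                    if k != i:
                        for jj in (j + (k - i), j - (k - i)):
                            if 0 <= jj < n:
                                diag += x[k, jj]
    return col + 0.5 * B * diag

def nq_row_inputs(x, i, A=1.0, B=1.0):
    """Inputs (15) of row i: u_ij = -A(sum_k x_kj - 1) - B * (diagonal sums)."""
    n = x.shape[0]
    u = -A * (x.sum(axis=0) - 1.0)        # column term, one value per column j
    for j in range(n):
        s = 0.0
        for k in range(n):
            if k != i:
                for jj in (j + (k - i), j - (k - i)):
                    if 0 <= jj < n:
                        s += x[k, jj]
        u[j] -= B * s
    return u

def ochom_n_queens(n, A=1.0, B=1.0, max_sweeps=10000, seed=0):
    """OCHOM for n-queens: every row is a group; gamma = A inside a row."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n, n))
    x[np.arange(n), rng.integers(0, n, size=n)] = 1.0   # Step 1: random queens
    for _ in range(max_sweeps):
        for i in range(n):                   # Step 3: rows in cyclic order
            u = nq_row_inputs(x, i, A, B)    # Step 4
            a = int(np.argmax(x[i]))         # Step 5: active neuron alpha
            b = int(np.argmax(u))            # Step 5: maximum input beta
            if u[b] - A > u[a]:              # Step 6: energy reduction condition
                x[i, a], x[i, b] = 0.0, 1.0
        if nq_energy(x, A, B) == 0.0:        # Step 7: global minimum reached
            return x
    return x                                 # may still be a local minimum
```

The sketch keeps exactly one queen per row by construction and stops when E = 0; it is meant only to make the group-update loop explicit, not to reproduce the timings reported below.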
A total of 12 chessboard sizes from 8 to 2000 is considered in our simulations on an Origin 2000 computer (Silicon Graphics Inc.). Fig. 1 shows a random binary initial state of the CHOM for the 50-queens problem and the optimal solution that

we obtain from it. In each size, 100 simulation runs from different random initial states of the competitive Hopfield model are performed. In all of them the OCHOM always converges to a global minimum, except for the very small sizes 8-10, in which the convergence rate is near 90%. As Funabiki et al. [12] pointed out, small sizes represent less competitive problems, and it is more difficult for the neural network to escape from a local minimum. For this reason our sequential model presented in [15] performs better than the OCHOM in these cases. However, even for small sizes the computation time is significantly reduced with the OCHOM.

Fig. 1. A random binary initial state of the CHOM for the 50-queens problem and the optimal solution that we obtain from it.

If we apply to this problem the maximum neuron model of Takefuji et al. with the Hopfield updating rule, it is easily shown that the energy decrease is not guaranteed, since $\gamma_{\alpha\beta} = A > 0$ for every two neurons $\alpha$ and $\beta$ belonging to the same group (row). We have also implemented this model and carried out several simulation runs to confirm this. As in the OCHOM, we consider sequential updating of the groups. Fig. 2 shows the evolution of the energy function on every iteration step in this case, for $n = 10$, 20 and 50, respectively, where one step means that a row is considered for updating. Observe, then, how the energy function decreases and increases alternately.

Funabiki et al. [12] proposed a neural network for the n-queens problem based on the maximum neuron model and updating rule of Takefuji et al. They presented an algorithm for three computation modes: n-parallel, n²-parallel and sequential. Their simulation results show that the network performs best in the n-parallel mode, that is, sequential updating of the groups. Note that this is the computation mode of the OCHOM. The n²-parallel mode consists of parallel updating of the groups, that is, all the rows are selected for updating at time $t$. We have implemented the maximum neuron model and updating rule of Takefuji et al. in the n²-parallel mode for the n-queens problem. Simulation results show that, although the energy has a general tendency to decrease, accentuated in large sizes, the energy function can increase and decrease alternately. Fig. 3 shows the evolution of the energy function on every iteration step in this n²-parallel mode, for $n = 10$, 20 and 50, respectively, where one step means that all the rows are considered for updating.

Finally, we have implemented the maximum neuron model and updating rule of Takefuji et al. for the n-queens problem in the n-parallel mode, that is, sequential updating of the rows. Despite the fact that the energy decrease is not guaranteed with this model in this problem, since $\gamma_{\alpha\beta} = A > 0$, simulation runs show that the energy is never increased in any chessboard size. Next, we explain why the energy always decreases in this network for the n-queens problem, even with $\gamma_{\alpha\beta} > 0$, due to the special implementation of the function max in the maximum neuron model. From (13), the energy change is

$\Delta E(t) = -\left[\Delta u_\beta(t) - \Delta u_\alpha(t) - \gamma_{\alpha\beta}\right]$

where $\gamma_{\alpha\beta} = A$ if $\beta \neq \alpha$ and $\gamma_{\alpha\beta} = 0$ if $\beta = \alpha$. Observe that, from the maximum neuron model, we have $u_\alpha(t) \ge u_\beta(t)$ and $u_\beta(t+1) \ge u_\alpha(t+1)$. Then, since Funabiki et al. set $A = B = 1$ and all the inputs are integer numbers, we have that $\Delta E(t) > 0$ if and only if $u_\alpha(t) = u_\beta(t)$ and $u_\beta(t+1) = u_\alpha(t+1)$, with $\beta \neq \alpha$. In this case the energy change that would result is $\Delta E(t) = A$. However, this case never occurs in the neural network of Funabiki et al., because the function max of their maximum neuron model always returns the first argument with the maximum value of $u_{ij}$.
In this way, if $u_\alpha(t) = u_\beta(t)$, it means that $\alpha$ precedes $\beta$ in the row, since the neuron $\alpha$ is the one with the output one at time $t$ in the row. Then, if $u_\beta(t+1) = u_\alpha(t+1)$, the neuron with the output one at time $t+1$ will again be neuron $\alpha$, because the function max returns the first neuron in the row with the maximum value of $u_{ij}$. Hence, since $\beta = \alpha$, no updating is made and $\Delta E(t) = 0$. It is concluded that if the function max is implemented in this model such that it randomly selects one of the neurons with the maximum value of $u_{ij}$, then the energy descent is not guaranteed in this problem.

Several simulation runs in each size from different initial states were performed for the implemented n-parallel neural network of Funabiki et al. without heuristics, that is, the maximum neuron model and updating rule of Takefuji et al. with sequential updating of the rows. Simulation results show that, although the energy descent is guaranteed, as we have shown, the network is usually trapped in local minima. Thus, if the network converges to a global minimum, the required number of iteration steps is much smaller in the OCHOM when the same initial state is used. Observe that an iteration step of the OCHOM requires an equivalent number of operations to an iteration step of Funabiki's network without heuristics, and hence an equivalent computation time. For comparison, the evolution of the energy function on every iteration step for both models is represented in Fig. 4, where one step means that a row is considered for updating. The dashed line represents the values of the energy in Funabiki's neural network without heuristics, and the solid line the values of the energy in the OCHOM. We consider the same randomly selected initial state for both models in every size, $n = 10$, 20, 50 and 100. Note that in two of the examples the OCHOM reaches the global minimum in 82 and 253 steps, respectively, while the other model is still trapped in a local minimum after 5000 and 7000 steps, respectively. In the other two examples, although both models reach the global minimum, the OCHOM requires 33 and 1036 steps, respectively, and the other one 1438 and 5046 steps, respectively.

Fig. 2. Graphical representation of the values of the energy function on every iteration step in the n-queens problem for n = 10, 20 and 50, applying the maximum neuron model of Takefuji et al. with Hopfield's updating rule for the n-parallel mode, that is, sequential updating of the groups.

Fig. 3. Graphical representation of the values of the energy function on every iteration step in the n-queens problem for n = 10, 20 and 50, applying the updating rule and the maximum neuron model of Takefuji et al. for the n²-parallel mode, that is, parallel updating of the groups.

In order to avoid the local-minima convergence of the maximum neuron model and updating rule of Takefuji et al. in this problem, two heuristics are added by Funabiki et al. One of them is the input saturation heuristic and the other is the hill-climbing term, where the parameters used in them are selected by trial and error, because no method is given. Table I compares the simulation results obtained by applying the OCHOM with the simulation results described by Funabiki et al. in [12] for the n-parallel mode, in which their network performs best. In each size, 100 simulation runs are performed to calculate the convergence rate and the average number of iteration steps. Note that here an iteration step is considered when all the rows have been updated. Observe that an iteration step of the OCHOM requires fewer operations and less computation time than an iteration step of Funabiki's model, since in the latter the two heuristics must be applied on every step. It can be observed that, although the convergence rate obtained by Funabiki et al. by adding the two heuristics is always near 100%, the required number of iteration steps for convergence, and hence the computation time, is significantly reduced in the OCHOM. This is due to the fact that the Hopfield updating rule used in the OCHOM is computationally more efficient, since it provides $u_i(t)$ instead of $\Delta u_i(t)$ as in Takefuji's updating rule. Finally, observe that Funabiki et al. presented simulation results in up to the 500-queens problem, and we have found solutions in up to the 2000-queens problem.

B. The Bipartite Subgraph Problem

The bipartite subgraph problem is defined as follows [20].

Instance: Graph $G = (V, E)$, positive integer $K \le |E|$.

Question: Is there a subset $E' \subseteq E$ with $|E'| \ge K$ such that $G' = (V, E')$ is bipartite?

This problem can be transformed into the following optimization problem [9]:

minimize $\quad \frac{1}{2}\sum_{x=1}^{m}\sum_{y=1}^{m} d_{xy}\sum_{i=1}^{2} v_{xi}\,v_{yi}$

subject to $\quad \sum_{i=1}^{2} v_{xi} = 1, \quad x = 1, \ldots, m$

TABLE I. SIMULATION RESULTS

where $d = (d_{xy})$ is the adjacency matrix. In this problem the minimum number of edges must be removed from the graph such that the remaining graph is a bipartite graph. Observe that the vertices are partitioned into two disjoint sets and no edge exists between two vertices in the same set. The problem can be represented and solved using an $m \times 2$ neural network, where the binary output $v_{xi} = 1$ of the $xi$th neuron means that vertex $x$ belongs to the subgraph (cluster) $i$, for $i = 1, 2$, and $v_{xi} = 0$ otherwise.

Fig. 4. Graphical representation of the values of the energy function on every iteration step in the n-queens problem for n = 10, 20, 50 and 100, respectively. The dashed line represents the evolution of the energy applying the updating rule and the maximum neuron model of Takefuji, Lee et al., and the solid line the evolution of the energy in the OCHOM, considering in both cases the same initial state for the n-parallel mode.

Lee et al. [9] proposed a neural-network algorithm based on the maximum neural network. In this model, as in the OCHOM, one and only one neuron per group must have one as its output. In this way, if it is considered that every group of the model is a row of the network, the constraint $\sum_{i} v_{xi} = 1$ is always automatically satisfied. Then, the energy function of the network is reduced to

$E = \frac{1}{2}\sum_{x=1}^{m}\sum_{y=1}^{m} d_{xy}\sum_{i=1}^{2} v_{xi}\,v_{yi}.$   (16)

Observe that this model can be easily extended to the p-partite subgraph problem, in which the minimum number of edges must be removed from the graph such that the remaining graph is a p-partite graph. In this case the problem is solved using an $m \times p$ neural network. By comparing the energy function (16) defined for the problem and the Hopfield energy function, the connection weights and the biases are derived. If we substitute them in the Hopfield's updating rule we get

$u_{xi}(t) = -\sum_{y=1}^{m} d_{xy}\,v_{yi}(t).$

Observe that, since $d_{xx} = 0$, there are no self-connections and no interconnections between neurons in the same group (row), and we have $\gamma_{\alpha\beta} = 0$. The following procedure describes the proposed general algorithm for the p-partite subgraph problem based on the OCHOM (a runnable sketch is given after this procedure). Observe that this is a p-parallel algorithm, since $p$ neurons are updated simultaneously on every step.

1) Set the initial state of the competitive Hopfield model by randomly setting the output of one neuron in each group (row) to be one and all the other neurons in the group (row) to be zero.
2) Evaluate the initial value of the energy function (16).
3) Select a group (row) $x$.
4) Compute the inputs $u_{xi}(t)$ of the neurons in the group (row) by $u_{xi}(t) = -\sum_{y} d_{xy} v_{yi}(t)$, for $i = 1, \ldots, p$.
5) Select the activated neuron $\alpha$ in the group (row) and select the neuron $\beta$ with the maximum input in the group (row).
6) If $u_\beta(t) > u_\alpha(t)$, then set $v_\alpha(t+1) = 0$ and $v_\beta(t+1) = 1$, and $E(t+1) = E(t) - [u_\beta(t) - u_\alpha(t)]$; else $E(t+1) = E(t)$, since no updating is made.
7) Repeat from Step 3) until $E$ reaches an equilibrium value.
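A hedged sketch of this procedure follows (our own code; the stopping rule approximates the stability criterion of Section IV-B, which declares a stable state when the energy remains unchanged during 200 network updates):

```python
import numpy as np

def ochom_partite(d, p=2, stable_sweeps=200, seed=0):
    """OCHOM for the p-partite subgraph problem (p = 2: bipartite).

    Row x of the m-by-p matrix v is the group of vertex x; the inputs
    are u_xi = -sum_y d_xy v_yi, and gamma = 0 in this problem, so the
    candidate beta wins whenever u_beta > u_alpha."""
    m = d.shape[0]
    rng = np.random.default_rng(seed)
    v = np.zeros((m, p))
    v[np.arange(m), rng.integers(0, p, size=m)] = 1.0   # Step 1: random clusters
    unchanged = 0
    while unchanged < stable_sweeps:
        changed = False
        for x in range(m):                 # sequential updating of the groups
            u = -d[x] @ v                  # Step 4: inputs of the p row neurons
            a = int(np.argmax(v[x]))       # Step 5: active neuron alpha
            b = int(np.argmax(u))
            if u[b] > u[a]:                # Step 6: energy reduction condition
                v[x, a], v[x, b] = 0.0, 1.0
                changed = True
        unchanged = 0 if changed else unchanged + 1
    return v

# The number of embedded edges is the total minus those inside a cluster:
# embedded = d.sum() / 2 - sum(v[:, i] @ d @ v[:, i] for i in range(p)) / 2.
```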

We have tested our neural network for the bipartite subgraph problem, that is, for $p = 2$, on an Origin 2000 computer (Silicon Graphics Inc.). For comparison, the maximum neural network proposed by Lee et al. [9] for this problem has also been implemented. A total of 12 graph sizes from 10 to 2000 vertices with six edge densities from 5 to 90% were simulated, where the density of an $m$-vertex graph is defined by the ratio between the number of given edges and $m(m-1)/2$. In each size, 100 simulation runs with different graphs were performed. We consider that the networks are in a stable state if the energy remains unchanged during 200 network updates.

Simulation results confirm that the energy descent is guaranteed in this problem when we apply the maximum network with sequential updating of the groups of two neurons, as we have shown in Section III. However, the network is usually trapped in local minima and does not provide good-quality solutions. For that reason, Lee et al. propose for the bipartite subgraph problem a parallel updating of the groups of neurons. On the other hand, our simulation results show that, although good-quality solutions can be obtained for the medium and small sizes, the energy decrease is not always guaranteed by applying the parallel maximum network to this problem.

Fig. 5 shows, for various sizes, the typical transition pattern of the number of edges embedded in a solution by each algorithm. The dashed line represents the evolution in the parallel maximum network and the solid line in the OCHOM, where one step means that all the rows are considered for updating. It is observed that in the parallel maximum network the energy descent, that is, the increment of the number of embedded edges, is usually guaranteed only for the small sizes. Observe in the two smallest examples that, although both networks obtain the same solution quality, the OCHOM requires a smaller number of steps. For the medium sizes, it is observed that, although the parallel maximum network still converges to an acceptable solution, the energy decrease is not guaranteed. In this way, this network degrades the performance, and for large-scale networks we have found graph problems where the parallel maximum network does not provide any solution. In these cases the energy is increased until it remains unchanged, as shown in the largest example in Fig. 5. However, the convergence rate of the OCHOM to an acceptable solution is 100% for all the sizes, and even for large-scale networks it always finds an optimal solution rapidly, guaranteeing the energy decrease.

We present in Table II simulation results for the small and medium sizes. For bigger sizes the two neural algorithms are not comparable, since the parallel maximum network many times does not provide any solution and requires long computation times. Table II shows the average number of remaining edges in a solution, the average number of steps for convergence, and the percentage of superiority for both models in every size, where edge densities of 25 and 50% are considered. The percentage of superiority is computed as the number of times that one algorithm generates a graph with more embedded edges than the other. Note that an iteration step is considered when all the rows of the neural network have been updated.
Fig. 5. Graphical representation of the number of embedded edges on every iteration step in the bipartite subgraph problem for m = 20, 200, 400 and 1000, respectively, where the 25% density graph is considered. The dashed line represents the evolution of the number of embedded edges applying the parallel maximum neural network, and the solid line the evolution applying the OCHOM, considering in both cases the same initial state.

Simulation results with all six graph densities show that the OCHOM is superior to the parallel maximum neural network proposed by Lee et al. in terms of the solution quality, because the average number of embedded edges in the solution is always bigger in all the sizes. Experimental results also show that this superiority is accentuated in

the large sizes and when the density of the graph is increased.

TABLE II. SIMULATION RESULTS

Table II shows that the OCHOM is also much superior in terms of the computation time on a usual sequential machine, since an updating of all the rows of the OCHOM requires an equivalent number of operations to an iteration step of the parallel maximum neural network. However, implemented on a parallel machine with $m$ processors, the parallel maximum neural network for these sizes may require less computation time than the OCHOM, in which we consider sequential or asynchronous updating of the groups.

V. CONCLUSION

In this paper a binary CHOM is presented for combinatorial optimization. Through the energy reduction condition we show how to derive the dynamics of the model that maximize the descent of any Lyapunov energy function as the groups of neurons are updated. In this way we obtain the OCHOM and show that the maximum neuron model of Takefuji et al. is an instance of the OCHOM for neural networks with no self-connections and no interconnections between neurons belonging to the same group. The updating rule presented by Takefuji et al. for the maximum neuron model is also discussed, showing that it can only guarantee the descent of some special energy functions.

The effectiveness of the OCHOM for combinatorial optimization is demonstrated through the n-queens problem and the bipartite subgraph problem. For the n-queens problem, a neural network with the maximum neuron model and updating rule of Takefuji et al. has been presented by Funabiki et al., where two heuristics were added to avoid the local-minima convergence of the network. However, simulation results show that the OCHOM provides the capability of escaping from local minima in this problem without the help of heuristic methods, since the convergence rate is 100% except for the very small sizes. In addition, our neural network can find solutions much faster than the one of Funabiki et al., because the required number of iteration steps is significantly reduced.

For the bipartite subgraph problem, the best known neural algorithm, based on the parallel maximum neural network, has also been implemented. Performance comparison through massive simulation runs has shown that the neural algorithm based on the OCHOM rapidly converges to good-quality solutions in all the sizes. However, the algorithm based on the parallel maximum network does not provide any solution for some graph problems with the larger sizes, since the energy descent is not guaranteed. Moreover, our OCHOM always finds better solutions in all the sizes, where the superiority is accentuated in the larger sizes and when the density of the graph is increased. In addition, the computation time on a usual sequential machine is reduced, since the number of iteration steps for convergence is much smaller. The algorithm can be easily extended to the p-partite subgraph problem, where the $p$ neurons in the same group are updated synchronously. In this p-parallel mode the neurons can be implemented in $p$ processors, taking advantage of the parallelism. We have shown in the n-queens problem and in the bipartite subgraph problem that the OCHOM does not degrade the performance in very large size instances, unlike the binary sequential Hopfield model that we presented in [15].
However, this sequential model is more universal, since the OCHOM is useful only for combinatorial optimization problems in which one and only one neuron must be on in each column or row of the network. Nevertheless, many practical optimization problems can be reduced to this neural representation, as Takefuji and coworkers have shown through a large number of powerful applications [6]-[12]. We conclude that the OCHOM for the n-queens problem and for the bipartite subgraph problem is superior to the existing neural networks for these problems. In the future we plan to show the efficiency of the OCHOM in solving more combinatorial optimization problems.

REFERENCES

[1] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.
[2] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Nat. Academy Sci. USA, vol. 79, pp. 2554-2558, 1982.
[3] Y. Takefuji, K. C. Lee, and H. Aiso, "An artificial maximum neural network: A winner-take-all neuron model forcing the state of the system in a solution domain," Biol. Cybern., vol. 67, pp. 243-251, 1992.
[4] Y. Takefuji, L. L. Chen, K. C. Lee, and J. Huffman, "Parallel algorithms for finding a near-maximum independent set of a circle graph," IEEE Trans. Neural Networks, vol. 1, May 1990.
[5] N. Funabiki and S. Nishikawa, "A binary Hopfield neural-network approach for satellite broadcast scheduling problems," IEEE Trans. Neural Networks, vol. 8, Mar. 1997.

[6] Y. Takefuji and K. C. Lee, "A parallel algorithm for tiling problems," IEEE Trans. Neural Networks, vol. 1, 1990.
[7] N. Funabiki and J. Kitamichi, "A gradual neural-network algorithm for jointly time-slot/code assignment problems in packet radio networks," IEEE Trans. Neural Networks, vol. 9, Nov. 1998.
[8] Y. Takefuji and K. C. Lee, "Artificial neural networks for four-coloring map problems and K-colorability problems," IEEE Trans. Circuits Syst., vol. 38, 1991.
[9] K. C. Lee, N. Funabiki, and Y. Takefuji, "A parallel improvement algorithm for the bipartite subgraph problem," IEEE Trans. Neural Networks, vol. 3, Mar. 1992.
[10] K. C. Lee and Y. Takefuji, "A generalized maximum neural network for the module orientation problem," Int. J. Electron., vol. 72, no. 3, 1992.
[11] Y. Takefuji, Neural Network Parallel Computing. Boston, MA: Kluwer, 1992.
[12] N. Funabiki, Y. Takenaka, and S. Nishikawa, "A maximum neural network approach for N-queens problem," Biol. Cybern., vol. 76, 1997.
[13] M. Ohta, A. Ogihara, and K. Fukunaga, "Binary neural network with self-feedback and its application to N-queens problem," IEICE Trans. Inform. Syst., vol. E77-D, no. 4, 1994.
[14] J. Mandziuk, "Solving the N-queens problem with a binary Hopfield-type network," Biol. Cybern., vol. 72, 1995.
[15] G. Galán-Marín and J. Muñoz-Pérez, "A new input-output function for binary Hopfield neural networks," in Foundations and Tools for Neural Modeling, ser. Lecture Notes in Computer Science, vol. 1606. Berlin, Germany: Springer-Verlag, 1999.
[16] M. Tateishi and S. Tamura, "Comments on 'Artificial neural networks for four-coloring map problems and K-colorability problems'," IEEE Trans. Circuits Syst. I, vol. 41, 1994.
[17] L. Wang, "Discrete-time convergence theory and updating rules for neural networks with energy functions," IEEE Trans. Neural Networks, vol. 8, 1997.
[18] W. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bull. Math. Biophys., vol. 5, pp. 115-133, 1943.
[19] P. C. Chung, C. T. Tsai, E. L. Chen, and Y. N. Sun, "Polygonal approximation using a competitive Hopfield neural network," Pattern Recognition, vol. 27, no. 11, 1994.
[20] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: Freeman, 1979.
[21] K. Smith, M. Palaniswami, and M. Krishnamoorthy, "Neural techniques for combinatorial optimization with applications," IEEE Trans. Neural Networks, vol. 9, Nov. 1998.
[22] M. Glesner and W. Pöchmüller, Neurocomputers: An Overview of Neural Networks in VLSI. London, U.K.: Chapman and Hall.
[23] S. Shams and J. L. Gaudiot, "Implementing regularly structured neural networks on the DREAM machine," IEEE Trans. Neural Networks, vol. 6, 1995.
[24] N. H. Farhat, D. Psaltis, A. Prata, and E. Paek, "Optical implementation of the Hopfield model," Appl. Opt., vol. 24, 1985.

Gloria Galán-Marín was born in Badajoz, Spain. She received the B.S. degree in industrial engineering from the University of Sevilla, Spain, in 1995, and the Ph.D. degree in industrial engineering from the University of Málaga, Spain. In 1995, she joined the University of Málaga, where she is an Associate Professor in the Department of Applied Mathematics. She is currently on leave at the University of Extremadura, Spain. Her research interests include neural-network computing, combinatorial optimization, and applications of new techniques to industry and business problems.
José Muñoz-Pérez was born in Cazorla, Spain. He received the B.S. degree in mathematics from the University of Granada, Spain, in 1974, and the Ph.D. degree in mathematics from the University of Sevilla, Spain. From 1976 to 1989, he was an Associate Professor at the University of Sevilla, and since 1989 he has been a Professor at the University of Málaga in the Department of Languages and Computation Sciences, in the area of computing and artificial intelligence. He is also the Director of the Image Processing Institute in the Technological Park of Andalucía. His current research interests include neural networks, pattern recognition, and image processing. Dr. Muñoz-Pérez is a member of the Spanish Association for Artificial Intelligence.


More information

Synchronous vs asynchronous behavior of Hopfield's CAM neural net

Synchronous vs asynchronous behavior of Hopfield's CAM neural net K.F. Cheung, L.E. Atlas and R.J. Marks II, "Synchronous versus asynchronous behavior of Hopfield's content addressable memory", Applied Optics, vol. 26, pp.4808-4813 (1987). Synchronous vs asynchronous

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE 4: Linear Systems Summary # 3: Introduction to artificial neural networks DISTRIBUTED REPRESENTATION An ANN consists of simple processing units communicating with each other. The basic elements of

More information

The Complexity of Maximum. Matroid-Greedoid Intersection and. Weighted Greedoid Maximization

The Complexity of Maximum. Matroid-Greedoid Intersection and. Weighted Greedoid Maximization Department of Computer Science Series of Publications C Report C-2004-2 The Complexity of Maximum Matroid-Greedoid Intersection and Weighted Greedoid Maximization Taneli Mielikäinen Esko Ukkonen University

More information

CMSC 421: Neural Computation. Applications of Neural Networks

CMSC 421: Neural Computation. Applications of Neural Networks CMSC 42: Neural Computation definition synonyms neural networks artificial neural networks neural modeling connectionist models parallel distributed processing AI perspective Applications of Neural Networks

More information

Neural Networks for Machine Learning. Lecture 11a Hopfield Nets

Neural Networks for Machine Learning. Lecture 11a Hopfield Nets Neural Networks for Machine Learning Lecture 11a Hopfield Nets Geoffrey Hinton Nitish Srivastava, Kevin Swersky Tijmen Tieleman Abdel-rahman Mohamed Hopfield Nets A Hopfield net is composed of binary threshold

More information

Introduction to Artificial Neural Networks

Introduction to Artificial Neural Networks Facultés Universitaires Notre-Dame de la Paix 27 March 2007 Outline 1 Introduction 2 Fundamentals Biological neuron Artificial neuron Artificial Neural Network Outline 3 Single-layer ANN Perceptron Adaline

More information

Using Variable Threshold to Increase Capacity in a Feedback Neural Network

Using Variable Threshold to Increase Capacity in a Feedback Neural Network Using Variable Threshold to Increase Capacity in a Feedback Neural Network Praveen Kuruvada Abstract: The article presents new results on the use of variable thresholds to increase the capacity of a feedback

More information

INTRODUCTION TO ARTIFICIAL INTELLIGENCE

INTRODUCTION TO ARTIFICIAL INTELLIGENCE v=1 v= 1 v= 1 v= 1 v= 1 v=1 optima 2) 3) 5) 6) 7) 8) 9) 12) 11) 13) INTRDUCTIN T ARTIFICIAL INTELLIGENCE DATA15001 EPISDE 8: NEURAL NETWRKS TDAY S MENU 1. NEURAL CMPUTATIN 2. FEEDFRWARD NETWRKS (PERCEPTRN)

More information

Distributed Optimization. Song Chong EE, KAIST

Distributed Optimization. Song Chong EE, KAIST Distributed Optimization Song Chong EE, KAIST songchong@kaist.edu Dynamic Programming for Path Planning A path-planning problem consists of a weighted directed graph with a set of n nodes N, directed links

More information

Comparison of Simulation Algorithms for the Hopfield Neural Network: An Application of Economic Dispatch

Comparison of Simulation Algorithms for the Hopfield Neural Network: An Application of Economic Dispatch Turk J Elec Engin, VOL.8, NO.1 2000, c TÜBİTAK Comparison of Simulation Algorithms for the Hopfield Neural Network: An Application of Economic Dispatch Tankut Yalçınöz and Halis Altun Department of Electrical

More information

On Detecting Multiple Faults in Baseline Interconnection Networks

On Detecting Multiple Faults in Baseline Interconnection Networks On Detecting Multiple Faults in Baseline Interconnection Networks SHUN-SHII LIN 1 AND SHAN-TAI CHEN 2 1 National Taiwan Normal University, Taipei, Taiwan, ROC 2 Chung Cheng Institute of Technology, Tao-Yuan,

More information

Computing Consecutive-Type Reliabilities Non-Recursively

Computing Consecutive-Type Reliabilities Non-Recursively IEEE TRANSACTIONS ON RELIABILITY, VOL. 52, NO. 3, SEPTEMBER 2003 367 Computing Consecutive-Type Reliabilities Non-Recursively Galit Shmueli Abstract The reliability of consecutive-type systems has been

More information

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem.

an efficient procedure for the decision problem. We illustrate this phenomenon for the Satisfiability problem. 1 More on NP In this set of lecture notes, we examine the class NP in more detail. We give a characterization of NP which justifies the guess and verify paradigm, and study the complexity of solving search

More information

Artificial Neural Network and Fuzzy Logic

Artificial Neural Network and Fuzzy Logic Artificial Neural Network and Fuzzy Logic 1 Syllabus 2 Syllabus 3 Books 1. Artificial Neural Networks by B. Yagnanarayan, PHI - (Cover Topologies part of unit 1 and All part of Unit 2) 2. Neural Networks

More information

Unit 8: Introduction to neural networks. Perceptrons

Unit 8: Introduction to neural networks. Perceptrons Unit 8: Introduction to neural networks. Perceptrons D. Balbontín Noval F. J. Martín Mateos J. L. Ruiz Reina A. Riscos Núñez Departamento de Ciencias de la Computación e Inteligencia Artificial Universidad

More information

PROBLEM SOLVING AND SEARCH IN ARTIFICIAL INTELLIGENCE

PROBLEM SOLVING AND SEARCH IN ARTIFICIAL INTELLIGENCE Artificial Intelligence, Computational Logic PROBLEM SOLVING AND SEARCH IN ARTIFICIAL INTELLIGENCE Lecture 4 Metaheuristic Algorithms Sarah Gaggl Dresden, 5th May 2017 Agenda 1 Introduction 2 Constraint

More information

Neural Networks and Fuzzy Logic Rajendra Dept.of CSE ASCET

Neural Networks and Fuzzy Logic Rajendra Dept.of CSE ASCET Unit-. Definition Neural network is a massively parallel distributed processing system, made of highly inter-connected neural computing elements that have the ability to learn and thereby acquire knowledge

More information

Nature-inspired Analog Computing on Silicon

Nature-inspired Analog Computing on Silicon Nature-inspired Analog Computing on Silicon Tetsuya ASAI and Yoshihito AMEMIYA Division of Electronics and Information Engineering Hokkaido University Abstract We propose CMOS analog circuits that emulate

More information

The Power of Extra Analog Neuron. Institute of Computer Science Academy of Sciences of the Czech Republic

The Power of Extra Analog Neuron. Institute of Computer Science Academy of Sciences of the Czech Republic The Power of Extra Analog Neuron Jiří Šíma Institute of Computer Science Academy of Sciences of the Czech Republic (Artificial) Neural Networks (NNs) 1. mathematical models of biological neural networks

More information

THE information capacity is one of the most important

THE information capacity is one of the most important 256 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 44, NO. 1, JANUARY 1998 Capacity of Two-Layer Feedforward Neural Networks with Binary Weights Chuanyi Ji, Member, IEEE, Demetri Psaltis, Senior Member,

More information

Sorting Network Development Using Cellular Automata

Sorting Network Development Using Cellular Automata Sorting Network Development Using Cellular Automata Michal Bidlo, Zdenek Vasicek, and Karel Slany Brno University of Technology, Faculty of Information Technology Božetěchova 2, 61266 Brno, Czech republic

More information

Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine

Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine Song Li 1, Peng Wang 1 and Lalit Goel 1 1 School of Electrical and Electronic Engineering Nanyang Technological University

More information

Neural Networks and the Back-propagation Algorithm

Neural Networks and the Back-propagation Algorithm Neural Networks and the Back-propagation Algorithm Francisco S. Melo In these notes, we provide a brief overview of the main concepts concerning neural networks and the back-propagation algorithm. We closely

More information

Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption

Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption Application of Artificial Neural Networks in Evaluation and Identification of Electrical Loss in Transformers According to the Energy Consumption ANDRÉ NUNES DE SOUZA, JOSÉ ALFREDO C. ULSON, IVAN NUNES

More information

Hopfield Neural Network

Hopfield Neural Network Lecture 4 Hopfield Neural Network Hopfield Neural Network A Hopfield net is a form of recurrent artificial neural network invented by John Hopfield. Hopfield nets serve as content-addressable memory systems

More information

CLASSICAL error control codes have been designed

CLASSICAL error control codes have been designed IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 56, NO 3, MARCH 2010 979 Optimal, Systematic, q-ary Codes Correcting All Asymmetric and Symmetric Errors of Limited Magnitude Noha Elarief and Bella Bose, Fellow,

More information

Lecture 5: Logistic Regression. Neural Networks

Lecture 5: Logistic Regression. Neural Networks Lecture 5: Logistic Regression. Neural Networks Logistic regression Comparison with generative models Feed-forward neural networks Backpropagation Tricks for training neural networks COMP-652, Lecture

More information

Last updated: Oct 22, 2012 LINEAR CLASSIFIERS. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition

Last updated: Oct 22, 2012 LINEAR CLASSIFIERS. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition Last updated: Oct 22, 2012 LINEAR CLASSIFIERS Problems 2 Please do Problem 8.3 in the textbook. We will discuss this in class. Classification: Problem Statement 3 In regression, we are modeling the relationship

More information

ON THE NP-COMPLETENESS OF SOME GRAPH CLUSTER MEASURES

ON THE NP-COMPLETENESS OF SOME GRAPH CLUSTER MEASURES ON THE NP-COMPLETENESS OF SOME GRAPH CLUSTER MEASURES JIŘÍ ŠÍMA AND SATU ELISA SCHAEFFER Academy of Sciences of the Czech Republic Helsinki University of Technology, Finland elisa.schaeffer@tkk.fi SOFSEM

More information

THE PROBLEM of solving systems of linear inequalities

THE PROBLEM of solving systems of linear inequalities 452 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL. 46, NO. 4, APRIL 1999 Recurrent Neural Networks for Solving Linear Inequalities Equations Youshen Xia, Jun Wang,

More information

AN ELECTRIC circuit containing a switch controlled by

AN ELECTRIC circuit containing a switch controlled by 878 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 46, NO. 7, JULY 1999 Bifurcation of Switched Nonlinear Dynamical Systems Takuji Kousaka, Member, IEEE, Tetsushi

More information

Upper Bounds on the Time and Space Complexity of Optimizing Additively Separable Functions

Upper Bounds on the Time and Space Complexity of Optimizing Additively Separable Functions Upper Bounds on the Time and Space Complexity of Optimizing Additively Separable Functions Matthew J. Streeter Computer Science Department and Center for the Neural Basis of Cognition Carnegie Mellon University

More information

Single processor scheduling with time restrictions

Single processor scheduling with time restrictions J Sched manuscript No. (will be inserted by the editor) Single processor scheduling with time restrictions O. Braun F. Chung R. Graham Received: date / Accepted: date Abstract We consider the following

More information

MOMENT functions are used in several computer vision

MOMENT functions are used in several computer vision IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 13, NO. 8, AUGUST 2004 1055 Some Computational Aspects of Discrete Orthonormal Moments R. Mukundan, Senior Member, IEEE Abstract Discrete orthogonal moments

More information

Boltzmann Machine and Hyperbolic Activation Function in Higher Order Network

Boltzmann Machine and Hyperbolic Activation Function in Higher Order Network Modern Applied Science; Vol. 8, No. 3; 2014 ISSN 1913-1844 E-ISSN 1913-1852 Published by Canadian Center of Science and Education Boltzmann Machine and Hyperbolic Activation Function in Higher Order Network

More information

Design of Non-Binary Quasi-Cyclic LDPC Codes by Absorbing Set Removal

Design of Non-Binary Quasi-Cyclic LDPC Codes by Absorbing Set Removal Design of Non-Binary Quasi-Cyclic LDPC Codes by Absorbing Set Removal Behzad Amiri Electrical Eng. Department University of California, Los Angeles Los Angeles, USA Email: amiri@ucla.edu Jorge Arturo Flores

More information

DEVS Simulation of Spiking Neural Networks

DEVS Simulation of Spiking Neural Networks DEVS Simulation of Spiking Neural Networks Rene Mayrhofer, Michael Affenzeller, Herbert Prähofer, Gerhard Höfer, Alexander Fried Institute of Systems Science Systems Theory and Information Technology Johannes

More information

P versus NP. Math 40210, Spring September 16, Math (Spring 2012) P versus NP September 16, / 9

P versus NP. Math 40210, Spring September 16, Math (Spring 2012) P versus NP September 16, / 9 P versus NP Math 40210, Spring 2012 September 16, 2012 Math 40210 (Spring 2012) P versus NP September 16, 2012 1 / 9 Properties of graphs A property of a graph is anything that can be described without

More information

Enforcing Passivity for Admittance Matrices Approximated by Rational Functions

Enforcing Passivity for Admittance Matrices Approximated by Rational Functions IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 16, NO. 1, FEBRUARY 2001 97 Enforcing Passivity for Admittance Matrices Approximated by Rational Functions Bjørn Gustavsen, Member, IEEE and Adam Semlyen, Life

More information

Undirected Graphical Models

Undirected Graphical Models Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Properties Properties 3 Generative vs. Conditional

More information

Theoretical Computer Science

Theoretical Computer Science Theoretical Computer Science 532 (2014) 64 72 Contents lists available at SciVerse ScienceDirect Theoretical Computer Science journal homepage: www.elsevier.com/locate/tcs Bandwidth consecutive multicolorings

More information

Improving Repair-based Constraint Satisfaction Methods by Value Propagation

Improving Repair-based Constraint Satisfaction Methods by Value Propagation From: AAAI-94 Proceedings. Copyright 1994, AAAI (www.aaai.org). All rights reserved. Improving Repair-based Constraint Satisfaction Methods by Value Propagation Nobuhiro Yugami Yuiko Ohta Mirotaka Nara

More information

8. INTRACTABILITY I. Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley. Last updated on 2/6/18 2:16 AM

8. INTRACTABILITY I. Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley. Last updated on 2/6/18 2:16 AM 8. INTRACTABILITY I poly-time reductions packing and covering problems constraint satisfaction problems sequencing problems partitioning problems graph coloring numerical problems Lecture slides by Kevin

More information

Local Search & Optimization

Local Search & Optimization Local Search & Optimization CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2017 Soleymani Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 4 Outline

More information

APPLICATION OF RECURRENT NEURAL NETWORK USING MATLAB SIMULINK IN MEDICINE

APPLICATION OF RECURRENT NEURAL NETWORK USING MATLAB SIMULINK IN MEDICINE ITALIAN JOURNAL OF PURE AND APPLIED MATHEMATICS N. 39 2018 (23 30) 23 APPLICATION OF RECURRENT NEURAL NETWORK USING MATLAB SIMULINK IN MEDICINE Raja Das Madhu Sudan Reddy VIT Unversity Vellore, Tamil Nadu

More information

Cycles in the cycle prefix digraph

Cycles in the cycle prefix digraph Ars Combinatoria 60, pp. 171 180 (2001). Cycles in the cycle prefix digraph F. Comellas a, M. Mitjana b a Departament de Matemàtica Aplicada i Telemàtica, UPC Campus Nord, C3, 08034 Barcelona, Catalonia,

More information

Neighborly families of boxes and bipartite coverings

Neighborly families of boxes and bipartite coverings Neighborly families of boxes and bipartite coverings Noga Alon Dedicated to Professor Paul Erdős on the occasion of his 80 th birthday Abstract A bipartite covering of order k of the complete graph K n

More information

Multivalued functions in digital topology

Multivalued functions in digital topology Note di Matematica ISSN 1123-2536, e-issn 1590-0932 Note Mat. 37 (2017) no. 2, 61 76. doi:10.1285/i15900932v37n2p61 Multivalued functions in digital topology Laurence Boxer Department of Computer and Information

More information

Random walks and anisotropic interpolation on graphs. Filip Malmberg

Random walks and anisotropic interpolation on graphs. Filip Malmberg Random walks and anisotropic interpolation on graphs. Filip Malmberg Interpolation of missing data Assume that we have a graph where we have defined some (real) values for a subset of the nodes, and that

More information

Data Mining and Matrices

Data Mining and Matrices Data Mining and Matrices 08 Boolean Matrix Factorization Rainer Gemulla, Pauli Miettinen June 13, 2013 Outline 1 Warm-Up 2 What is BMF 3 BMF vs. other three-letter abbreviations 4 Binary matrices, tiles,

More information

Solving the N-Queens Puzzle with P Systems

Solving the N-Queens Puzzle with P Systems Solving the N-Queens Puzzle with P Systems Miguel A. Gutiérrez-Naranjo, Miguel A. Martínez-del-Amor, Ignacio Pérez-Hurtado, Mario J. Pérez-Jiménez Research Group on Natural Computing Department of Computer

More information

A FUZZY NEURAL NETWORK MODEL FOR FORECASTING STOCK PRICE

A FUZZY NEURAL NETWORK MODEL FOR FORECASTING STOCK PRICE A FUZZY NEURAL NETWORK MODEL FOR FORECASTING STOCK PRICE Li Sheng Institute of intelligent information engineering Zheiang University Hangzhou, 3007, P. R. China ABSTRACT In this paper, a neural network-driven

More information

A reinforcement learning scheme for a multi-agent card game with Monte Carlo state estimation

A reinforcement learning scheme for a multi-agent card game with Monte Carlo state estimation A reinforcement learning scheme for a multi-agent card game with Monte Carlo state estimation Hajime Fujita and Shin Ishii, Nara Institute of Science and Technology 8916 5 Takayama, Ikoma, 630 0192 JAPAN

More information

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Coping With NP-hardness Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you re unlikely to find poly-time algorithm. Must sacrifice one of three desired features. Solve

More information

ECE521 Lecture 7/8. Logistic Regression

ECE521 Lecture 7/8. Logistic Regression ECE521 Lecture 7/8 Logistic Regression Outline Logistic regression (Continue) A single neuron Learning neural networks Multi-class classification 2 Logistic regression The output of a logistic regression

More information

Stochastic Networks Variations of the Hopfield model

Stochastic Networks Variations of the Hopfield model 4 Stochastic Networks 4. Variations of the Hopfield model In the previous chapter we showed that Hopfield networks can be used to provide solutions to combinatorial problems that can be expressed as the

More information

Binary Decision Diagrams and Symbolic Model Checking

Binary Decision Diagrams and Symbolic Model Checking Binary Decision Diagrams and Symbolic Model Checking Randy Bryant Ed Clarke Ken McMillan Allen Emerson CMU CMU Cadence U Texas http://www.cs.cmu.edu/~bryant Binary Decision Diagrams Restricted Form of

More information

Single processor scheduling with time restrictions

Single processor scheduling with time restrictions Single processor scheduling with time restrictions Oliver Braun Fan Chung Ron Graham Abstract We consider the following scheduling problem 1. We are given a set S of jobs which are to be scheduled sequentially

More information

CS264: Beyond Worst-Case Analysis Lecture #18: Smoothed Complexity and Pseudopolynomial-Time Algorithms

CS264: Beyond Worst-Case Analysis Lecture #18: Smoothed Complexity and Pseudopolynomial-Time Algorithms CS264: Beyond Worst-Case Analysis Lecture #18: Smoothed Complexity and Pseudopolynomial-Time Algorithms Tim Roughgarden March 9, 2017 1 Preamble Our first lecture on smoothed analysis sought a better theoretical

More information