MICROCANONICAL OPTIMIZATION APPLIED TO THE TRAVELING SALESMAN PROBLEM


International Journal of Modern Physics C, Vol. 9, No. 1 (1998)
© World Scientific Publishing Company

ALEXANDRE LINHARES
Computação Aplicada e Automação, UFF, Niterói, RJ, Brazil
linhares@nucleo.inpe.br

JOSÉ R. A. TORREÃO
Computação Aplicada e Automação, UFF, Niterói, RJ, Brazil
jrat@caa.uff.br

Received 24 October 1997
Revised 17 December 1997

Optimization strategies based on simulated annealing and its variants have been extensively applied to the traveling salesman problem (TSP). Recently, there has appeared a new physics-based metaheuristic, called the microcanonical optimization algorithm (µo), which does not resort to annealing, and which has proven a superior alternative to the annealing procedures in various applications. Here we present the first performance evaluation of µo as applied to the TSP. When compared to three annealing strategies (simulated annealing, microcanonical annealing and Tsallis annealing), and to a tabu search algorithm, the microcanonical optimization has yielded the best overall results for several instances of the euclidean TSP. This confirms µo as a competitive approach for the solution of general combinatorial optimization problems.

Keywords: Combinatorial Optimization; Microcanonical Ensemble; Simulated Annealing; Traveling Salesman Problem.

1. Introduction

The traveling salesman problem (TSP) has been studied since the early days of scientific computation, and is now considered the benchmark in the field of combinatorial optimization. The problem can be easily stated: given a set of cities, the goal is to find a tour of minimal cost, going through each city only once and returning to the starting point. In spite of its simple formulation, the TSP has been proven to be NP-hard, meaning that there probably does not exist an algorithm which can exactly solve a general instance of the problem in plausible processing time. The best that can be expected is thus to find approximate strategies of solution, called heuristics. If a heuristic is a general-purpose procedure which can be applied to a variety of problems, it is referred to as a metaheuristic.

Among the metaheuristics employed for the TSP, optimization algorithms derived from statistical physics have received a great deal of attention.1-3 Simulated annealing, as introduced by Kirkpatrick et al.,4 was the first such algorithm, and many variants of it have appeared, such as fast simulated annealing,5 microcanonical annealing,6 and Tsallis annealing.3 Recently, a new strategy has been proposed which is also based on principles of statistical physics, but which does not resort to annealing. It is called the microcanonical optimization algorithm (µo), and has so far been employed, with remarkable success, in the context of visual processing,7,8 and for task allocation in distributed systems.9 Here, we present an analysis of µo when applied to the TSP, comparing it to some annealing-based procedures (simulated annealing, microcanonical annealing and Tsallis annealing), and also to a tabu search algorithm.10 The results which we report show µo to be a very competitive metaheuristic in this domain: when both execution time and solution quality are considered, it yielded the best performance of all the evaluated algorithms.

In the following section, we describe the microcanonical optimization algorithm. Next, we discuss some implementation details of the alternative metaheuristics considered. In Sec. 4, we present and analyze the results obtained in our work, concluding with our final remarks in Sec. 5.

2. Microcanonical Optimization

The microcanonical optimization algorithm consists of two procedures which are alternately applied: initialization and sampling. The initialization implements a local, and optionally aggressive, search of the solution space, in order to reach a local-minimum configuration. From there, the sampling phase proceeds, trying to free the solution from the local minimum by taking it to another configuration of equivalent cost. One can picture the metaheuristic, once stuck in a local-minimum valley, as trying to evolve by going around the peaks in the solution space, instead of attempting to climb them, as in simulated annealing, for instance. This is done by resorting to the microcanonical simulation algorithm by Creutz,11 which generates samples of fixed-energy configurations (see below). After the sampling phase, a new initialization is run, and the algorithm thus proceeds, alternating between the two phases, until a stopping condition is reached.

In what follows, we treat in greater detail the two phases of the microcanonical optimization. A pseudocode for the algorithm is given in Appendix A.

2.1. Initialization

In the initialization, µo performs a local search, starting from an arbitrary solution and proposing moves which are accepted only when leading to configurations of lower cost (lower energy, in physical terms). Optionally, an aggressive implementation of this phase can be chosen, meaning that the algorithm will always pick the best candidate in a subset of possible moves.
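To make the aggressive variant concrete, the sketch below (in Python, assumed here for illustration) implements one aggressive initialization step over path-reversal moves. The tour representation, the distance-matrix convention, and the number of candidates per step are illustrative assumptions, not the authors' code (the paper's value for the candidate count did not survive transcription).

    import random

    def tour_length(tour, dist):
        """Total cost of a closed tour under a distance matrix."""
        n = len(tour)
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    def aggressive_init_step(tour, dist, n_candidates=5):
        """One aggressive initialization step: evaluate a subset of random
        path-reversal moves and apply the best one only if it lowers the
        cost. Returns (new_tour, accepted, cost_change)."""
        base = tour_length(tour, dist)
        best, best_cost = None, base
        for _ in range(n_candidates):
            i, j = sorted(random.sample(range(len(tour)), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reversal
            c = tour_length(cand, dist)
            if c < best_cost:
                best, best_cost = cand, c
        if best is None:      # all candidates rejected: counts as a rejection
            return tour, False, 0.0
        return best, True, best_cost - base

For brevity the sketch recomputes the full tour length for each candidate; an actual implementation would evaluate the cost change of a reversal incrementally.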

In a non-aggressive implementation, the only free parameter of the initialization phase defines its stopping condition: since it cannot be rigorously established when a local minimum has been reached, it is necessary to define a maximum number of consecutively rejected moves as the criterion for interrupting this phase. In the case of an aggressive implementation (which we chose), it is also necessary to define the number of candidate moves to be considered in each initialization step. We also remark that, for the definition of the parameters to be employed in the sampling phase (see below), a list may be compiled, in the initialization, of those moves which have been rejected for leading to higher costs when compared to the current solution.

2.2. Sampling

As already mentioned, in the sampling phase the µo metaheuristic tries to free itself from the local minimum reached in the initialization, at the same time trying to remain close, in terms of cost, to the best solution so far obtained. It implements, for this purpose, a version of the Creutz algorithm, assuming an extra degree of freedom, called the demon, which generates small perturbations on the current solution. At each sampling iteration, random moves are proposed which are accepted only if the demon is capable of yielding or absorbing the cost difference incurred.

In µo, the demon is defined by two parameters: its capacity, D_MAX, and its initial value, D_I. The sampling generates a sequence of states whose energy is conserved, except for small fluctuations which are modeled by the demon. Calling E_S the energy (cost) of the solution obtained in the initialization, and D and E the energy of the demon and of the solution, respectively, at a given instant in the sampling phase, we must have E + D = E_S + D_I = constant. Thus, in terms of the initial energy and the capacity of the demon, this phase generates solutions in the cost interval [E_S - D_MAX + D_I, E_S + D_I].

D_I and D_MAX are, therefore, the main parameters to be considered in the implementation of the sampling. In the original formulation of the algorithm, such parameters were taken, at each sampling phase, as fixed fractions of the final cost obtained in the previous initialization.7 As one of the contributions of the present work, we have proposed an adaptive strategy for the determination of such parameters: taking the list of rejected moves compiled in the initialization phase (see above), we have sorted it in growing order of the cost jumps, choosing two of its lower entries as the values of demon capacity and initial energy. The idea is that such values will be representative of the hills found in the landscape of the region being searched in the solution space, and will thus be adequate for defining the magnitude of the perturbations required for the evolution of the current solution in the sampling phase.

In our implementations of µo for the TSP, the initialization was executed until a count of 1n consecutively rejected moves was reached, where n was the number of cities in the problem.
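A minimal sketch of the sampling rule and of the adaptive choice of D_I and D_MAX just described, reusing tour_length from the sketch in Sec. 2.1; the rank used to pick the entry from the sorted rejection list is an assumption, since the value used in the paper did not survive transcription.

    import random

    def pick_demon_params(rejected_jumps, rank=5):
        """Adaptive choice of demon parameters: sort the positive cost
        jumps of the moves rejected during initialization and take a
        low-ranked entry for both D_I and D_MAX (rank is assumed)."""
        jumps = sorted(j for j in rejected_jumps if j > 0)
        if not jumps:
            return 0.0, 0.0
        d = jumps[min(rank, len(jumps)) - 1]
        return d, d  # (D_I, D_MAX)

    def sampling_step(tour, dist, demon, d_max, propose):
        """One Creutz-style sampling step: the demon absorbs energy when
        a move lowers the cost (up to its capacity D_MAX) and yields
        energy when a move raises the cost (down to zero)."""
        cand = propose(tour)
        dcost = tour_length(cand, dist) - tour_length(tour, dist)
        if dcost <= 0 and demon - dcost <= d_max:   # demon absorbs the gain
            return cand, demon - dcost
        if dcost > 0 and demon - dcost >= 0:        # demon pays for the loss
            return cand, demon - dcost
        return tour, demon                          # move rejected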

The values of D_MAX and D_I were both usually taken as equal to one of the lowest entries in the list of rejected moves compiled in the initialization, except for a certain kind of city distribution, which required a change in this prescription (see Sec. 4). The sampling phase was run for only a few iterations, and the algorithm was made to stop upon reaching a given count of moves without improvement in the best solution encountered.

3. Alternative Strategies

In our experiments, we compared the performance of µo to those yielded by alternative strategies: simulated annealing, microcanonical annealing, Tsallis annealing and tabu search. Here we discuss some of the features of the implementation of such algorithms in our work.

3.1. Simulated annealing (SA)

Simulated annealing, as proposed by Kirkpatrick et al.,4 consists in the iterated implementation of the Metropolis algorithm,12 for a sequence of decreasing temperatures. The Metropolis algorithm is a computational procedure, long known in statistical physics, which generates samples of the states of a physical system at a fixed temperature. Since such a system obeys the Gibbs distribution, the states generated at low temperatures will be low-energy states.13,14 Identifying the energy of the system with the cost function in an optimization problem, Kirkpatrick et al. proposed the following optimization strategy: starting from an arbitrary solution, and a high temperature, the Metropolis algorithm is implemented, which means that moves are proposed which are accepted with probability p = min(1, exp(-ΔE/T)), where ΔE is the cost variation incurred, and T is the current temperature. After a large number of iterations, the value of T is decreased, and the process is repeated until T approaches zero. The initial value and rate of decrease of the temperature (which has no physical meaning in the optimization, being just a global control parameter of the process) constitute the annealing schedule of the algorithm.

In our implementations, we followed the prescriptions by Cerny,1 taking the temperature to decrease by 7% of its value at each annealing step, and keeping it constant for 1n accepted moves or 1n rejected moves, whichever came first, with n being the number of cities in the problem. The initial temperature was empirically determined: a number of trial moves, starting from the initial random solution, were analyzed, and the initial temperature was chosen greater than the maximum cost variation observed.

3.2. Tsallis annealing

This corresponds to a variant of simulated annealing, based on the statistics proposed by C. Tsallis.16 Here, the acceptance probability of the Metropolis algorithm is generalized to p = min(1, [1 - (1 - q) ΔE/T]^(1/(1-q))), such that SA is recovered in the limit q → 1. By appropriately choosing the value of q, it has been claimed3 that this algorithm can produce plausible TSP solutions in fewer steps than with fast simulated annealing.
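The two acceptance rules above differ only in how the Boltzmann factor is generalized; the small sketch below (Python assumed) makes the q → 1 limit explicit.

    import math, random

    def metropolis_accept(dcost, T):
        """Metropolis rule: always accept improvements; accept a cost
        increase dcost > 0 with probability exp(-dcost/T)."""
        return dcost <= 0 or random.random() < math.exp(-dcost / T)

    def tsallis_accept(dcost, T, q):
        """Generalized (Tsallis) rule: p = [1 - (1-q)*dcost/T]^(1/(1-q)),
        taken as zero when the bracket is negative; q -> 1 recovers the
        Metropolis exponential."""
        if dcost <= 0:
            return True
        if abs(q - 1.0) < 1e-9:
            p = math.exp(-dcost / T)
        else:
            base = 1.0 - (1.0 - q) * dcost / T
            p = base ** (1.0 / (1.0 - q)) if base > 0 else 0.0
        return random.random() < min(1.0, p)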

In our implementations, we followed the general annealing prescriptions described above for SA. As for the parameter q, specific to the Tsallis annealing, it has been suggested that the algorithm improves, in what concerns execution times, as q decreases.3 Such general behavior was confirmed in our work but, even though an exhaustive analysis has not been undertaken, we noticed a corresponding degradation in solution quality for q < -1. The value q = -1 was therefore employed in our experiments.

3.3. Microcanonical annealing (MA)

This algorithm also corresponds to a variant of simulated annealing, now based on a simulation of the states of a physical system at fixed energy, through the Creutz algorithm,6 instead of at fixed temperature (the SA and Tsallis algorithms would thus correspond to canonical annealings). As originally proposed, for visual processing applications, MA employed a lattice of demons, and was suited only for parallel implementations. In our single-demon sequential version, microcanonical annealing consists, basically, in the iterative application of the Creutz algorithm for progressively lower values of demon capacity. In our implementations, we took a demon of zero initial energy, such that, at the ith annealing step, states would be generated in the cost interval [E^(i-1) - D^(i), E^(i-1)], where D^(i) represents the current demon capacity, and E^(i-1) represents the final energy reached in the previous annealing step. The rate of decrease of the demon capacity was the same used in the canonical annealings for temperature decrease, with the initial demon value determined similarly to the initial annealing temperature: starting from a random solution, a number of prospective moves were analyzed, and the largest cost variation was taken as the demon capacity in the first annealing step.

3.4. Tabu search

In order to avoid getting entrapped in a local minimum, the tabu search algorithm selects, at each step, the best of a certain number of candidate moves, even if it leads to a higher cost, in which case the corresponding reverse move is included in a tabu list, to prevent the return to a solution already considered. In our experiments, we worked with a tabu list of 7 moves, following the suggestion of Glover,10 with each new tabu move being included in a random position in the list, so that its interdiction period would also be random. Another feature of our implementations was a so-called aspiration criterion, according to which, if a given tabu move leads to a solution which tops the best one so far encountered, its interdiction is ignored. The tabu search was made to stop upon reaching a given count of moves without improvement.
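A compact sketch of the tabu scheme just described: a fixed-length list of seven forbidden moves, random insertion position (so the tenure is random), and the aspiration criterion. It reuses tour_length from the sketch in Sec. 2.1; the encoding of a move as an index pair and the number of candidates per step are illustrative assumptions.

    import random

    def tabu_step(tour, dist, tabu, best_cost, n_candidates=5):
        """One tabu-search step over path-reversal moves: take the best of
        n_candidates random moves that is not tabu, unless a tabu move
        beats the best cost seen so far (aspiration). Returns the updated
        tour and tabu list; the caller starts with tabu = []."""
        n = len(tour)
        best_move, best_move_cost = None, float("inf")
        for _ in range(n_candidates):
            i, j = sorted(random.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            aspired = tour_length(cand, dist) < best_cost  # aspiration
            c = tour_length(cand, dist)
            if ((i, j) not in tabu or aspired) and c < best_move_cost:
                best_move, best_move_cost = (i, j, cand), c
        if best_move is None:
            return tour, tabu
        i, j, cand = best_move
        # For a reversal, the reverse move is the same (i, j) reversal;
        # forbid it for a random tenure by inserting at a random position
        # in a 7-slot list and truncating the tail.
        tabu.insert(random.randrange(len(tabu) + 1), (i, j))
        del tabu[7:]
        return cand, tabu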

4. Experiments

Our performance evaluation of µo was based on the solution of several instances of the euclidean TSP, employing a path-reversal dynamics.17 This means that the solution cost was taken as the total tour length measured by the euclidean norm, and that each trial move was a replacement of a randomly selected section of the tour by its reverse. Results obtained with a Pentium 133 processor will be reported here, for the following city distributions:

P1: 100 cities organized in a rectangular grid. Such a distribution, also employed by Cerny1 and by Laarhoven and Aarts,14 displays a global minimum which can be easily perceived, and is an example of degenerate topology, allowing many solutions of the same cost.

P3: 300 cities randomly distributed in eight distinct clusters along the sides of a square region. The optimal path, which is not known a priori, must cross each cluster only once.

PR76, PR124 and PR439: Configurations of 76, 124 and 439 cities, respectively, proposed by Padberg and Rinaldi, and compiled in the TSPLIB library.18 The corresponding optimal solutions are also shown in the TSPLIB.

K: A configuration of cities proposed by Krolak, also found, along with its optimal solution, in the TSPLIB.

In order to assess the quality of the solutions yielded by the various algorithms, we considered the distribution of the results obtained in several runs. The frequency histograms of the final costs for P1 and P3, in fifty executions, are shown in Figs. 1 and 2, where we include the results for the iterative improvement algorithm, which corresponds to implementing only the non-aggressive initialization phase of µo. From the figures, the superiority of the microcanonical optimization over the other approaches is apparent, but the tabu search and microcanonical annealing methods also proved to be competitive. SA and Tsallis annealing yielded poorer quality solutions, even though the latter was very fast.

Table 1 gives an idea of the average running times involved. It is important to remark that, due to the peculiarities of implementation of each algorithm, some of them tend naturally to prolong their execution in comparison to others. For instance, µo and tabu search will only stop after reaching a certain number of iterations without improvement, which means that, even after a long period without any progress, once those algorithms find a better configuration, they are granted an additional running time (a renewed count of iterations without improvement). The same is not true of the annealing strategies, which have their running times linked to fixed annealing schedules.
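As an illustration of the experimental setup described at the beginning of this section, the sketch below builds a rectangular-grid instance in the spirit of P1 and evaluates a random tour under the euclidean norm; the 10 × 10 grid dimensions are an assumption for the example, not necessarily the paper's exact instance.

    import math, random

    def grid_instance(rows, cols):
        """Cities on a rectangular grid, as in the P1 distribution."""
        return [(float(r), float(c)) for r in range(rows) for c in range(cols)]

    def euclidean_tour_length(tour, cities):
        """Total closed-tour length under the euclidean norm."""
        total = 0.0
        for k in range(len(tour)):
            (x1, y1) = cities[tour[k]]
            (x2, y2) = cities[tour[(k + 1) % len(tour)]]
            total += math.hypot(x2 - x1, y2 - y1)
        return total

    cities = grid_instance(10, 10)   # 100 cities; the optimum is easy to see
    tour = list(range(len(cities)))
    random.shuffle(tour)
    print(euclidean_tour_length(tour, cities))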

Fig. 1. Frequency histograms, in fifty runs, of the final costs obtained for problem P1. (Panels: Iterative Improvement, Simulated Annealing, Microcanonical Annealing, Tsallis Annealing, Tabu Search, Microcanonical Optimization.)

Fig. 2. Frequency histograms, in fifty runs, of the final costs obtained for problem P3. (Panels: Iterative Improvement, Simulated Annealing, Microcanonical Annealing, Tsallis Annealing, Tabu Search, Microcanonical Optimization.)

Table 1. Average execution time (minutes), in five runs, of µo, tabu search, microcanonical annealing (MA), Tsallis annealing, and simulated annealing (SA). Processor: Pentium 133.

          µo      Tabu    MA      Tsallis   SA
    P1    :48     :8      :47     1:        2:37
    P3    2:      3:29    2:9     1:48      4:4

From such initial results, we have been led to undertake a more careful comparative analysis of µo, tabu search and microcanonical annealing. Table 2 summarizes the results obtained in fifty runs for the distributions K and PR76. The corresponding graphs of running time versus final cost for K are depicted in Fig. 3. We see that the microcanonical annealing did not show any appreciable variation in execution time, even though it performed quite poorly, in this respect, in problem K. Tabu search, on the other hand, showed a behavior similar to that of µo, a feature which was observed for all configurations where the cities were evenly distributed over the plane, without the formation of well-defined groups. The solutions yielded by µo were slightly superior to those generated by the annealing, but required a little longer processing time in K.

Table 2. Average, maximum, and minimum values obtained in fifty runs of µo, tabu search, and microcanonical annealing (MA), for problems K and PR76. E denotes cost and t denotes execution time, in minutes. Processor: Pentium 133.

    K       E_avg   E_min   E_max   t_avg   t_min   t_max
    µo                              :       :9      4:
    Tabu                            :28     :2      2:3
    MA                              :42     8:33    8:48

    PR76    E_avg   E_min   E_max   t_avg   t_min   t_max
    µo                              :42     2:34    :43
    Tabu                            :38     2:37    11:43
    MA                              :32     3:27    3:37

A different situation was met in problems PR124 and PR439, which share the peculiar characteristic of presenting relatively distant groups of densely packed cities, in a topology quite distinct from the ones previously analyzed. Such a topology gives rise to the existence of a large number of local-minimum solutions, differing only in the intra-group sequences of cities, which are very close in cost. In this kind of problem, the intrinsic divide-and-conquer nature of annealing4,6 proves to be quite invaluable, since it allows the initial optimization of the long paths between groups (which are dominant in terms of cost), leaving the finer details of the intra-group paths for subsequent processing.

Fig. 3. Execution times versus final costs obtained in fifty runs for K. Times in minutes. (Three scatter plots: Microcanonical Optimization, Microcanonical Annealing, Tabu Search.)

In contrast to that, tabu search, by accepting, at each step, the least expensive move (as long as it is not tabu), restricts itself, most of the time, to short-scale changes in the solutions. Therefore, it has difficulty in processing the large-scale corrections of the paths between groups. Similarly, µo finds it hard to evolve in such a topology, unless the demon parameters are chosen large enough to accommodate large-scale rearrangements. For this reason, in our implementations for PR124 and PR439, we had to choose a higher-ranked term of the list of rejected moves for the demon parameters. As illustrated in Fig. 4, for PR439, tabu search, which received no special tuning for this particular situation, fared worse in those problems.

Fig. 4. Execution times versus final costs obtained in fifty runs for PR439. Times in minutes. (Three scatter plots: Microcanonical Optimization, Microcanonical Annealing, Tabu Search.)

It is interesting, in this respect, to remark that µo seems to be more efficient than tabu search in breaking loose from local-minimum configurations. The curves in Fig. 5, obtained for problem P3, illustrate this. The plots show the values of the current solution and of the best solution so far encountered, as the algorithms evolve. The tabu heuristic, once in a local minimum, accepts the best of the proposed moves, irrespective of its cost. Since moves which are quite bad can thus be accepted repeatedly, the heuristic tends to stray from the best solution so far obtained. This should be compared to the behavior of µo, where the limited capacity of the demon keeps the current and the best solutions always close. This, nevertheless, does not seem to compromise the quality of the overall optimization: the algorithm is able to find a way to a near-optimal solution, passing only through intermediate states which are approximately local minima.

Fig. 5. Comparative evolution of the current solution (fine line) and of the best solution (thick line), at each implementation step, for P3. (Two panels: Tabu Search and Microcanonical Optimization; horizontal axis: steps.)

Fig. 6. Frequency histograms, in fifty runs, of the final costs obtained for problem P3, with execution time limited to 3 min. (Panels: Microcanonical Optimization and Tabu Search.)

Finally, since the quality of the final results is also a function of the execution time, and since µo and tabu search obey different stopping criteria, we also compared their performance in limited-time implementations. The distributions of results obtained in fifty runs for P3, with a time limit of 3 min, are shown in Fig. 6, which makes clear, once again, the better performance of µo.

5. Conclusions

We have presented an analysis of the performance of a new heuristic, the microcanonical optimization algorithm (µo), when applied to the euclidean traveling salesman problem. When confronted with alternative approaches to the TSP (simulated annealing, microcanonical annealing, Tsallis annealing and tabu search), µo yielded the best overall results in our experiments. We have found it to be consistently faster than simulated annealing and consistently superior, in terms of solution quality, to the Tsallis annealing, even though the latter proved to be an efficient strategy for finding plausible solutions in short running times, as already claimed.3 Microcanonical annealing and tabu search also performed well in our analysis. Due to the adaptive divide-and-conquer nature of the annealing, MA was able to outperform tabu search (though not µo), in what concerns the quality of the solutions, in certain problems with highly non-uniform city distributions, which require a scale-dependent processing. In most of the other experiments, tabu search proved itself the closest competitor to µo, yielding slightly inferior results in comparable execution times. We conclude that µo is a very promising heuristic for combinatorial optimization problems, as demonstrated by its extremely robust and efficient performance in the benchmark application of the TSP.

References

1. V. Cerny, J. Optimization Theory and Applications 45, 41 (1985).
2. J. J. Hopfield and D. W. Tank, Biol. Cybern. 52, 141 (1985).
3. T. J. P. Penna, Phys. Rev. E 51, R1 (1995).
4. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Science 220, 671 (1983).
5. H. Szu and R. Hartley, Phys. Lett. A 122, 157 (1987).
6. S. T. Barnard, Int. J. Comp. Vision 3, 17 (1989).
7. J. R. A. Torreão and E. Roe, Phys. Lett. A 205, 377 (1995).
8. J. L. Fernandes and J. R. A. Torreão, in Lecture Notes in Computer Science, Proc. 3rd Asian Conf. on Computer Vision (Springer-Verlag, Heidelberg, 1998), to appear.
9. S. C. S. Porto, A. M. Barroso, and J. R. A. Torreão, in Proc. 2nd Metaheuristics Int. Conf. (INRIA, Sophia-Antipolis, 1997).
10. F. Glover, ORSA J. Comp. 1, 190 (1989); ORSA J. Comp. 2, 4 (1990).
11. M. Creutz, Phys. Rev. Lett. 50, 1411 (1983).
12. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).

13. L. E. Reichl, A Modern Course in Statistical Physics (The University of Texas Press, Austin, 1986).
14. P. J. M. Laarhoven and E. H. L. Aarts, Simulated Annealing: Theory and Applications (Kluwer Academic Publishers, Amsterdam, 1987).
15. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing (Cambridge University Press, Cambridge, 1992).
16. C. Tsallis, J. Stat. Phys. 52, 479 (1988).
17. S. Lin and B. W. Kernighan, Oper. Res. 21, 498 (1973).
18. G. Reinelt, ORSA J. Comp. 3, 376 (1991).

Appendix A

Here we present the pseudocode for the microcanonical optimization metaheuristic.

µo algorithm:

    Let maxcycle be the maximum number of iterations without
      improvement of the solution cost;
    repeat
        do Initialization;
        do Sampling;
    until (maxcycle is reached)
    end

Fig. A.1. µo algorithm.

procedure Initialization:

    Empty list-of-rejected-moves;
    Let maxinit be the maximum number of consecutive rejected moves;
    Let s be the starting solution of the initialization phase;
    num_rejmoves ← 0;
    while (num_rejmoves < maxinit) do
        Choose a move randomly;
        Call the new solution s';
        Compute cost E of solution s;
        Compute cost E' of solution s';
        costchange ← E' - E;
        if (costchange ≥ 0) then
            Put costchange in the list-of-rejected-moves;
            num_rejmoves ← num_rejmoves + 1;
        end if
        else
            num_rejmoves ← 0;
            s ← s';

        end else
    end while
    end

Fig. A.2. Initialization procedure.

procedure Sampling:

    Select D_MAX and D_I from the list-of-rejected-moves;
    Let maxsamp be the maximum number of sampling iterations;
    Let s be the starting solution of the sampling phase;
    num_iter ← 0;
    D ← D_I;
    while (num_iter < maxsamp) do
        Choose a move randomly;
        Call the new solution s';
        Compute cost E of solution s;
        Compute cost E' of solution s';
        costchange ← E' - E;
        if (costchange ≤ 0) then
            if (D - costchange ≤ D_MAX) then
                s ← s';
                D ← D - costchange;
            end if
        end if
        else {costchange > 0}
            if (D - costchange ≥ 0) then
                s ← s';
                D ← D - costchange;
            end if
        end else
        num_iter ← num_iter + 1;
    end while
    end

Fig. A.3. Sampling procedure.
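For reference, a compact runnable translation of the pseudocode above into Python, tying together the pieces sketched in the previous sections. The parameter values (the rejection count per initialization, the demon-parameter rank, and the stopping counts) are placeholders, since the paper's exact settings did not all survive transcription.

    import math
    import random

    def tour_length(tour, cities):
        n = len(tour)
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
                   for i in range(n))

    def reverse_move(tour):
        """Path-reversal (2-opt-style) trial move used throughout the paper."""
        i, j = sorted(random.sample(range(len(tour)), 2))
        return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

    def initialization(tour, cities, maxinit):
        """Local search; collects the cost jumps of rejected moves."""
        rejected, num_rej = [], 0
        cost = tour_length(tour, cities)
        while num_rej < maxinit:
            cand = reverse_move(tour)
            dc = tour_length(cand, cities) - cost
            if dc >= 0:
                rejected.append(dc)
                num_rej += 1
            else:
                tour, cost, num_rej = cand, cost + dc, 0
        return tour, cost, rejected

    def sampling(tour, cities, rejected, maxsamp, rank=5):
        """Creutz-style fixed-energy walk; demon parameters taken
        adaptively from the sorted rejection list (rank is assumed)."""
        jumps = sorted(j for j in rejected if j > 0) or [1.0]  # safeguard
        d_i = d_max = jumps[min(rank, len(jumps)) - 1]
        demon, cost = d_i, tour_length(tour, cities)
        for _ in range(maxsamp):
            cand = reverse_move(tour)
            dc = tour_length(cand, cities) - cost
            if (dc <= 0 and demon - dc <= d_max) or (dc > 0 and demon - dc >= 0):
                tour, cost, demon = cand, cost + dc, demon - dc
        return tour

    def mco(cities, maxcycle=20, maxsamp=50):
        """Alternate initialization and sampling until maxcycle cycles
        pass without improvement of the best tour found."""
        n = len(cities)
        tour = random.sample(range(n), n)
        best, best_cost, idle = tour, tour_length(tour, cities), 0
        while idle < maxcycle:
            # maxinit was a multiple of n in the paper; the factor is assumed
            tour, cost, rejected = initialization(tour, cities, maxinit=n)
            if cost < best_cost:
                best, best_cost, idle = tour, cost, 0
            else:
                idle += 1
            tour = sampling(tour, cities, rejected, maxsamp)
        return best, best_cost

    if __name__ == "__main__":
        random.seed(1)
        pts = [(random.random(), random.random()) for _ in range(30)]
        print(mco(pts)[1])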
