Criticality and Parallelism in Combinatorial Optimization


Criticality and Parallelism in Combinatorial Optimization

William G. Macready, Athanassios G. Siapas, Stuart A. Kauffman

SFI WORKING PAPER

SFI Working Papers contain accounts of scientific work of the author(s) and do not necessarily represent the views of the Santa Fe Institute. We accept papers intended for publication in peer-reviewed journals or proceedings volumes, but not papers that have already appeared in print. Except for papers by our external faculty, papers must be based on work done at SFI, inspired by an invited visit to or collaboration at SFI, or funded by an SFI grant.

NOTICE: This working paper is included by permission of the contributing author(s) as a means to ensure timely distribution of the scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the author(s). It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may be reposted only with the explicit permission of the copyright holder.

SANTA FE INSTITUTE

Criticality and Parallelism in Combinatorial Optimization

William G. Macready, Athanassios G. Siapas†, Stuart A. Kauffman

Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM
†Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA

June 2, 1995

Abstract: We demonstrate the existence of a phase transition in combinatorial optimization problems. For many of these problems, as local search algorithms are parallelized, the quality of solutions first improves and then sharply degrades to no better than random search. This transition can be successfully characterized using finite-size scaling, a technique borrowed from statistical physics. We demonstrate our results for a family of generalized spin-glass models and the Traveling Salesman Problem. We determine critical exponents, investigate the effects of noise, and discuss conditions for the existence of the phase transition.

Optimization tasks are common and very often uncommonly difficult. In most areas of science and engineering, from free energy minimization in physics to profit maximization in economics, the need to optimize is ubiquitous. In light of the importance and difficulty of optimization, much effort has gone into developing effective algorithms for finding good optima. The methods of simulated annealing [1], genetic algorithms [2], and taboo search [3] are three of the most popular techniques, inspired by ideas from statistical mechanics, evolutionary biology, and artificial intelligence respectively. All of these methods rely in part on constructing improved solutions by applying a local operator to a population of candidate solutions. Good solutions result from the accumulation of many beneficial local modifications applied one after another. An obvious speed-up in the algorithm's performance can be obtained if we apply many local modifications in parallel. Despite the promise that parallel algorithms hold, they have received less attention and little is known about them [4]. In this report we investigate the effects of parallelizing local search for combinatorial optimization. We demonstrate that for a wide class of search techniques increasing parallelism leads to better solutions faster. But only up to a certain point. At some degree of parallelism the quality of solutions abruptly degrades to that of random sampling. This transition is sharp and will be shown to share many of the characteristics of thermodynamic phase transitions. We compute critical exponents that characterize this phase transition using finite-size scaling, a technique borrowed from statistical physics, and demonstrate our results on two important problems: energy minimization on NK energy functions [5], and tour length minimization for the traveling salesman problem (TSP) [6].
The NK model, a generalization of spin-glass models [7], was chosen as one of the few general models capable of generating tunably difficult optimization tasks. The TSP is perhaps the most famous and well-studied combinatorial optimization task and is often used as a test-bed for new ideas. Both problems, though quite different, show remarkably similar behavior. The NK model [5] defines a family of energy functions over a discrete search space. The search space consists of all possible configurations s = {s_1, ..., s_N} of N variables. If each of the variables can take any of A possible values, the search space is of size A^N. We confine our investigation to the case A = 2; we call our variables spins and let them take on the values ±1 due to their similarity with the Ising spins which arise in models of magnetism. Our

local modifications for the NK model are single spin flips, where s_i → −s_i. Each spin s_i makes a contribution to the total energy that depends on its own state and the states of K other spins. These K other spins may be selected at random or according to some specified topology. The total energy of a configuration {s} is defined by:

E{s} = (1/N) Σ_{i=1}^{N} E_i(s_i, s_{i1}, ..., s_{iK})   (1)

By analogy with spin glasses [7], the local energy contributions E_i(s_i, s_{i1}, ..., s_{iK}), one for each of the 2^{K+1} local spin configurations {s_i, s_{i1}, ..., s_{iK}}, are chosen at random from a uniform distribution over [0, 1). However, by specializing the E_i and the selection of the K neighbors we can investigate specific optimization problems, including spin glasses, graph coloring, number partitioning, etc. The number of other spins, K, that each spin interacts with varies from 0 to N − 1 and controls the ruggedness of the energy landscape. In the K = 0 limit the spins are independent and there is a single minimum. As K increases, the number of conflicting constraints increases, leading to multiple local minima that can trap local search algorithms. At the K = N − 1 extreme, every spin affects every other spin, energies of adjacent configurations are uncorrelated, and there are exponentially many local minima. This K ≫ 1 limit is analytically tractable and has been studied by many authors [8]. We focus on simulated annealing [1] as our representative local search algorithm. In simulated annealing, local modifications are accepted according to the Metropolis criterion [9], a method for simulating the evolution of a physical system in a heat bath to thermal equilibrium. A modification is applied and the resulting change in energy, ΔE, is computed. The modification is always accepted if it lowers the energy, and accepted with probability p(ΔE) = exp(−ΔE/T) if ΔE > 0. T is a temperature parameter and controls the fraction of uphill moves that are accepted; at zero temperature, T = 0, only downhill moves are accepted.
It is important to occasionally accept uphill moves (i.e., T > 0) to prevent trapping in poor local minima where all local modifications raise the energy. Simulated annealing derives its name from the annealing, or gradual lowering, of the temperature parameter, so that uphill moves are accepted with decreasing frequency as the temperature lowers.
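To make the setup concrete, here is a minimal sketch of the NK energy function of Eq. (1) together with one sequential Metropolis sweep. This is our own illustration, not code from the paper; all function and variable names are invented for the sketch.

```python
import numpy as np

def make_nk(N, K, rng):
    """Random NK landscape: spin i interacts with K randomly chosen
    other spins; each of its 2^(K+1) local configurations is assigned
    an energy drawn uniformly from [0, 1)."""
    nbrs = np.array([rng.choice(np.setdiff1d(np.arange(N), [i]), K, replace=False)
                     for i in range(N)])
    tables = rng.random((N, 2 ** (K + 1)))
    # affected[i]: spins whose local energy E_j changes when spin i flips
    affected = [[i] + [j for j in range(N) if i in nbrs[j]] for i in range(N)]
    return nbrs, tables, affected

def e_local(j, s, nbrs, tables):
    """E_j(s_j, s_{j1}, ..., s_{jK}) as a table lookup on K+1 spins."""
    idx = 0
    for b in [s[j]] + list(s[nbrs[j]]):
        idx = 2 * idx + (1 if b > 0 else 0)
    return tables[j, idx]

def total_energy(s, nbrs, tables):
    """E{s} = (1/N) sum_i E_i, as in Eq. (1)."""
    return sum(e_local(j, s, nbrs, tables) for j in range(len(s))) / len(s)

def metropolis_sweep(s, nbrs, tables, affected, T, rng):
    """One generation: N sequential single-spin-flip Metropolis attempts.
    Downhill moves are always accepted; uphill moves with prob exp(-dE/T)."""
    N = len(s)
    for _ in range(N):
        i = int(rng.integers(N))
        before = sum(e_local(j, s, nbrs, tables) for j in affected[i])
        s[i] = -s[i]
        dE = (sum(e_local(j, s, nbrs, tables) for j in affected[i]) - before) / N
        if dE > 0 and (T == 0 or rng.random() >= np.exp(-dE / T)):
            s[i] = -s[i]  # reject the uphill move: undo the flip
    return s
```

At T = 0 this reduces to greedy descent: a sweep can never raise the total energy, which is why the sequential algorithm eventually traps in a local minimum.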

Figure 1: Energy reached at the end of 500 generations versus τ, for N = 5000, T = 0, and several different K values. The search space is vast, of size 2^5000. A generation is defined to be N update attempts, an average τN of which are simultaneous. The average over 30 randomly generated landscapes is plotted for each τ value.

Beyond the obvious computational speedups of applying many local modifications simultaneously, parallel local search also allows escape from poor local optima. Even when only greedy moves are accepted (T = 0), allowing simultaneous spin updates introduces the possibility of "mistakes": if the energy of s_i depends on s_j and both spins simultaneously attempt to flip, s_i may conclude that ΔE < 0. However, such a decision may be based on outdated information, since s_i made its decision assuming s_j didn't flip. If s_j did flip, the energy of s_i may actually go up, causing ΔE > 0. We parameterize the degree of parallelism by 0 < τ ≤ 1, which denotes the probability that a spin attempts to flip. If there are N variables, then on average τN of them are updating under the local operator at the same time. The τ → 0 limit corresponds to a sequential algorithm, which at

T = 0 will result in the system eventually trapping in (usually poor) local optima. Maximal parallelism is obtained for τ = 1, when all variables update simultaneously. At this extreme, the system typically cannot converge, since too many of the spins use outdated information. The transition between these two extremes is surprisingly sharp [10]. As the parallelism is increased from 0, the system finds better solutions because it becomes increasingly easier to escape poor local optima. If the energy landscape is smooth enough (K < 5), the asymptotic energy decreases monotonically with τ, achieving a minimum at complete parallelism (for example, the K = 2 curve in Figure 1). However, if K > 5 this benefit is realized only up to a certain degree of parallelism τ_c, beyond which there is an abrupt degradation in performance (Figure 1). This abrupt degradation is associated with an order-disorder transition in the update dynamics. Above the transition, where the energy remains high, spins continue to flip indefinitely. Below the transition, reaching low energy configurations is accompanied by a freezing of the spins (at T = 0) to particular values. This motivates the definition of the following order parameter, which distinguishes the behavior on the two sides of the transition. Let p_i be the probability that at equilibrium the i-th spin will flip if asked. This is 0 if the spin is frozen and 0.5 if the spin is flipping randomly. We define the order parameter for the i-th spin to be its entropy, S_i = −p_i log₂ p_i − (1 − p_i) log₂(1 − p_i). The order parameter S for the entire system is defined as the average entropy per spin, S = (1/N) Σ_{i=1}^{N} S_i. Plots of the order parameter for K = 6 near the transition can be found in Figure 2(b). The order parameter is close to 0 below τ_c and rises sharply towards 1 above τ_c. The sharpness of the transition, as well as the location of τ_c, depend upon the size N of the system.
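The synchronous update that introduces these "mistakes" can be sketched as follows; this is our own minimal illustration of the T = 0 dynamics, with `delta_e` a hypothetical callback standing in for the ΔE computation of the chosen energy function.

```python
import numpy as np

def parallel_step(s, tau, delta_e, rng):
    """One synchronous generation at T = 0: each spin independently
    attempts a flip with probability tau, computing its energy change
    from the *old* configuration; accepted flips are applied all at
    once, so a decision may rest on stale information about neighbors."""
    N = len(s)
    attempts = rng.random(N) < tau          # on average tau*N spins update
    # delta_e(i, s): energy change if spin i alone flipped in state s
    accept = np.array([bool(attempts[i]) and delta_e(i, s) < 0
                       for i in range(N)])
    s_new = s.copy()
    s_new[accept] = -s_new[accept]          # simultaneous application
    return s_new
```

At τ → 0 at most one spin updates per step and the dynamics is effectively sequential; at τ = 1 every spin decides from the same stale snapshot, which is the regime where convergence fails.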
We use finite-size scaling [11], a method from statistical physics in which the observation of how the critical point τ_c(N) changes with the size of the system gives direct evidence for critical behavior at the transition. As the size of the system N increases, the transition sharpens and the transition point shifts according to:

τ_c(N) − τ_c(∞) ∝ N^{−1/ν}   (2)

Figure 2(a) shows the results of the finite-size scaling analysis for K = 6. The empirical observation behind this analysis is that sufficiently close to the critical point, systems of all sizes are indistinguishable except for an

Figure 2: (a) τ_c(N) vs. N^{−1/ν}, where ν ≈ 1.30 ± 0.04 and τ_c(∞) ≈ 0.723 ± 0.001, computed by a nonlinear least-squares fit from a collection of (N, τ_c(N)) values. Here we define τ_c(N) to be the τ value at which the entropy is 0.5. (b) Order parameter of K = 6 energy functions with different N values. (c) Rescaled order parameter curves plotted with respect to y = N^{1/ν}(τ − τ_c)/τ_c instead of τ. Note that all curves collapse onto a universal (N-independent) curve.
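The order parameter S and the scaling variable y used in Figure 2 are simple to compute from observed flip frequencies; a sketch (our own naming, with ν and τ_c defaulting to the fitted values quoted in the caption):

```python
import numpy as np

def order_parameter(flips, attempts):
    """Average entropy per spin, S = (1/N) sum_i S_i, with
    S_i = -p_i log2 p_i - (1 - p_i) log2(1 - p_i) and p_i estimated
    as the fraction of update attempts on which spin i flipped."""
    p = np.asarray(flips, dtype=float) / np.asarray(attempts, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        s_i = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return float(np.mean(np.nan_to_num(s_i)))  # 0 log 0 := 0

def rescale(tau, N, nu=1.30, tau_c=0.723):
    """Scaling variable y = N^(1/nu) (tau - tau_c)/tau_c; nu and tau_c
    default to the K = 6 fit of panel (a)."""
    return N ** (1.0 / nu) * (np.asarray(tau) - tau_c) / tau_c
```

Fully frozen spins (p_i = 0) give S = 0, random flipping (p_i = 0.5) gives S = 1, and plotting S against y rather than τ is what collapses the different-N curves of panel (b) onto the universal curve of panel (c).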

Figure 3: Dependence of the transition on temperature. Here N = 800, K = 12, the temperature T is fixed for each curve, and the average over 50 runs of the energy found at the end of G = 150 generations is plotted.

overall change of scale. By defining a rescaled parameter as

y = N^{1/ν} (τ − τ_c(∞)) / τ_c(∞)   (3)

the rescaled curves fall on a universal (N-independent) curve (Fig. 2(c)). The existence, position, and sharpness of the transition shown in Figures 1 and 2 depend not only on N and K, but also on the temperature T and running time G of the algorithm. Figure 3 demonstrates that the transition persists for small temperatures and broadens as we move to higher temperatures. The length of time, G, we allow the algorithm to run also affects the transition [12]. Deep in the ordered regime (τ ≪ τ_c), convergence is extremely fast, as low entropy (frozen) regions quickly come to dominate and rapidly spread throughout the system (Figure 4(a)). In the other extreme (τ ≫ τ_c), asymptotic convergence is also fast, as high entropy regions quickly come to

dominate (Figure 4(d)). Close to the transition, there is a fight between the low and high entropy domains, neither of which quickly dominates (Figures 4(b) and (c)). This phenomenon is reminiscent of the critical slowing down seen in Monte Carlo simulations of physical phase transitions. Near the transition, both low entropy (violet) and high entropy (yellow-red) clusters alternately come to percolate across the lattice. We gain insight into the transition by making an annealed approximation, in which the energy function is randomly assigned anew after each local move. The transition persists in the annealed model and occurs at the same point, τ_c(K). It is interesting that a further simplification, obtained by making a mean-field approximation in which fluctuations in the energy between sites are ignored (i.e., all E_i = E), completely eliminates the transition. The energy increases smoothly with τ when fluctuations are ignored. The fluctuations drive the transition; in the language of physics, the transition is noise-induced. This can be understood simply. As noted previously, performance degrades due to spins making decisions with outdated information when their local environment changes. If, in spite of the outdated information, the environment doesn't change, spin flips will decrease the energy. Such cases are likely to arise in locally low energy configurations. Even if numerous spins in the local environment are given the chance to flip, it is unlikely that many of them will, since they are already at such a low energy. For those spins that do flip, it is likely they will have acted beneficially, lowering the energy further. Low energy fluctuations thus tend to nucleate regions of lower energy. Similarly, high energy fluctuations nucleate higher energy regions. This behavior is evident in the clusters of Figure 4, where high entropy/energy regions are yellow and low entropy/energy regions are blue.
Below τ_c, low energy fluctuations dominate, crystallizing larger and larger low energy regions so that all spins eventually "freeze" into a very low energy state. Above τ_c, high energy fluctuations win out and the order seen below τ_c melts. Thus far we have described an abrupt transition in a family of optimization problems as the degree of parallelism is varied. Qualitatively, at least, the forces driving the transition are not unique to the NK model. The transition is driven by overlapping applications of the local operator. The degree to which one application undoes the beneficial effects of the other will determine how much parallelism can be afforded. We address the genericity of the transition by investigating a very different optimization task.

Figure 4: Rendering of the entropy fields for a lattice of spins. The energy of each spin depends on the states of the nearest K = 12 spins in the lattice and the temperature is zero. The color of each spin represents the entropy computed over the last 40 of a total of 80 generations. A rainbow color map is used, with the violet end of the spectrum corresponding to low entropy. The images correspond to (a) τ = 0.43, (b) τ = 0.46, (c) τ = 0.47, (d) τ = 0.51.

The second optimization task we consider is the classic traveling salesman problem. Though part of its importance lies in its role as a benchmark for new theory and optimization techniques, the TSP and its variants have many practical applications, ranging from printed circuit board design to X-ray crystallography to scheduling. The task is simple: find the shortest tour passing through a set of cities, visiting each city only once. One of the most effective solution techniques for the TSP is due to Lin and Kernighan [14]. Their method relies on a local operator, k-opt, to improve solutions. The k-opt operator removes k edges between cities and replaces them with k new edges such that the new edges still form a valid tour. At each improvement, the Lin-Kernighan heuristic intelligently chooses an appropriate value of k to consider. Here we investigate the quality of solutions as a particular k-opt move is applied with increasing parallelism. The results we present do not depend on the exact form of the operator, but only on the fact that it has a local interaction range. The operator we use is defined as follows: for city c_i we select k − 1 other cities at random, yielding a set {c_i, c_{i1}, ..., c_{ik−1}} of k cities. This subset of cities is then cyclically permuted within the tour, each selected city moving to the tour position of the next selected city. For example, if the tour starting at city 1 and visiting cities 2, 3, ..., 10, 1 in sequence is represented as {1 2 3 4 5 6 7 8 9 10} and the subset of cities considered for a 3-opt move is {3 5 7}, then the resulting tour after the move is {1 2 7 4 3 6 5 8 9 10}. The results for a set of N = 439 cities and various interaction ranges k are presented in Figure 5. The results are exactly analogous to the ones presented for the NK model, with k playing the role of K. For k ≤ 3 there is no transition, and better solutions are obtained as τ increases. This monotonic improvement arises from the fact that as τ increases it becomes easier to escape local optima.
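The cyclic-permutation move can be sketched as follows; this is our own illustration, assuming (as in the worked example above) that each selected city moves to the position of the next selected city.

```python
import random

def cyclic_k_move(tour, k, rng=random):
    """k-city move: choose k tour positions at random and cyclically
    permute the cities occupying them, each city moving to the position
    of the next selected city."""
    positions = sorted(rng.sample(range(len(tour)), k))
    cities = [tour[p] for p in positions]
    new_tour = list(tour)
    for idx, p in enumerate(positions):
        new_tour[p] = cities[(idx - 1) % k]  # predecessor moves into slot p
    return new_tour
```

Forcing the selected positions to those of cities 3, 5, and 7 in the tour {1 2 3 4 5 6 7 8 9 10} reproduces the example in the text: the move yields {1 2 7 4 3 6 5 8 9 10}.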
As we increase the range of the operator by increasing k, we eventually reach a point where it is no longer possible to achieve locally optimal solutions for high τ values, and an abrupt degradation of performance is observed after a certain degree τ_c of parallelism. The location of τ_c decreases with the range of the operator, being approximately τ_c ≈ 0.85 for k = 4 and τ_c ≈ 0.65 for k = 5. We have also looked at inversions, where the direction of the tour is reversed between two randomly chosen cities. This operator can make great changes, especially if the selected cities are far apart in the tour. The transition for inversions is at τ_c < 0.1. A study of the order parameter for the TSP revealed behavior exactly analogous to that described for the NK model.
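For comparison, the inversion operator mentioned above is even simpler to state in code (again a sketch of our own, not the authors' implementation):

```python
import random

def inversion(tour, rng=random):
    """Reverse the direction of the tour between two randomly chosen
    cities; the farther apart the cut points, the larger the change."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
```

Because a single inversion can rearrange a long stretch of the tour, overlapping parallel applications interfere strongly, which is consistent with the very low τ_c reported for this operator.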

Figure 5: Expected tour lengths under 3-opt, 4-opt, and 5-opt local moves as a function of τ for a set of N = 439 cities called pr439.tsp. This tour is supplied in the TSPLIB package, which is available by anonymous ftp from elib.zib-berlin.de. The results are averaged over 30 initial starting points after having run 100 generations.

Conclusions

We have discovered and characterized a phase transition that arises as local search operators on complicated spaces are parallelized. The existence of this phase transition places a sharp upper bound on the amount of useful parallelism in local search. Moreover, the results presented here provide the foundation for a general theory that would predict which problems could or could not be successfully parallelized. Such a theory would be based on an analysis of the synergistic/antagonistic effects of overlapping applications of local operators. These effects may result from the inherent ruggedness of the particular landscape representation of the problem (the NK case) or from the range of interaction of local operators (k-opt in the TSP case). A phase transition seen in the satisfiability of constraint satisfaction problems [15] has previously shown the importance of critical phenomena in artificial intelligence. The phase transition described here is quite distinct from this satisfiability transition and further demonstrates the importance of critical behavior in many fields of optimization and artificial intelligence. It is interesting to consider whether the critical behavior presented here arises in more general distributed systems of optimizing agents. Any time the decision of one agent relies on information contained in the state of another agent, the possibility exists for an abrupt degradation in performance as more agents act in parallel and the amount of "stale" information that exists due to nonzero information propagation delays increases. Phase transitions such as the one described in this report may be lurking in many human organizations.

We would like to thank Dan Stein for useful discussions and comments, particularly his insights into the energy decrease leading up to the phase transition. We would also like to thank J. Tsitsiklis, N. Berker, G. J. Sussman, H. Abelson, K. Dahmen, and E. Hung for many useful suggestions.

References

[1] S.
Kirkpatrick, C. D. Gelatt Jr., M. P. Vecchi, Science 220, 671 (1983).

[2] S. Forrest, Science 261, 872 (1993).

[3] F. Glover, ORSA J. Comput. 1, 190 (1989); F. Glover, ibid. 2, 4 (1990); D. Cvijovic, J. Klinowski, Science 267, 664 (1995).

[4] R. Azencott, ed., Simulated Annealing: Parallelization Techniques (John Wiley & Sons, New York, 1992).

[5] S. A. Kauffman, The Origins of Order (Oxford University Press, 1993).

[6] Gerhard Reinelt, The Traveling Salesman: Computational Solutions for TSP Applications (Springer-Verlag, Berlin, Heidelberg, 1994).

[7] K. H. Fischer, J. A. Hertz, Spin Glasses (Cambridge University Press, 1991).

[8] B. Derrida, Phys. Rev. B 24, 2613 (1981); C. A. Macken, A. S. Perelson, Proc. Natl. Acad. Sci. USA 86, 6191 (1989); E. D. Weinberger, Phys. Rev. A 44, 6399 (1991).

[9] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, E. Teller, J. Chem. Phys. 21, 1087 (1953).

[10] We stress that no such transition occurs for sequential dynamics when we investigate how the asymptotic energy depends upon search at a quenched, i.e. fixed, temperature. The dependence of the asymptotic energy on T is markedly different from the dependence on parallelism.

[11] R. J. Creswick et al., Introduction to Renormalization Group Methods in Physics (Wiley & Sons, 1992); J. J. Binney et al., The Theory of Critical Phenomena (Oxford University Press, Oxford, 1992).

[12] In the numerical experiments reported in Figures 1, 2, 3, and 5 we have waited sufficiently long that further increases of G will not noticeably affect the transition.

[13] G. Weisbuch, D. Stauffer, J. Physique 48, 11 (1987); E. D. Weinberger, Phys. Rev. A 44, 6399 (1991).

[14] S. Lin, B. W. Kernighan, Operations Research 21 (1973).

[15] S. Kirkpatrick, B. Selman, Science 264, 1297 (1994); P. Cheeseman, B. Kanefsky, W. M. Taylor, Proc. of the 12th IJCAI (1991).


More information

Single Solution-based Metaheuristics

Single Solution-based Metaheuristics Parallel Cooperative Optimization Research Group Single Solution-based Metaheuristics E-G. Talbi Laboratoire d Informatique Fondamentale de Lille Single solution-based metaheuristics Improvement of a solution.

More information

chem-ph/ Feb 95

chem-ph/ Feb 95 LU-TP 9- October 99 Sequence Dependence of Self-Interacting Random Chains Anders Irback and Holm Schwarze chem-ph/9 Feb 9 Department of Theoretical Physics, University of Lund Solvegatan A, S- Lund, Sweden

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms CSE 0, Winter 08 Design and Analysis of Algorithms Lecture 8: Consolidation # (DP, Greed, NP-C, Flow) Class URL: http://vlsicad.ucsd.edu/courses/cse0-w8/ Followup on IGO, Annealing Iterative Global Optimization

More information

A Monte Carlo Implementation of the Ising Model in Python

A Monte Carlo Implementation of the Ising Model in Python A Monte Carlo Implementation of the Ising Model in Python Alexey Khorev alexey.s.khorev@gmail.com 2017.08.29 Contents 1 Theory 1 1.1 Introduction...................................... 1 1.2 Model.........................................

More information

and B. Taglienti (b) (a): Dipartimento di Fisica and Infn, Universita di Cagliari (c): Dipartimento di Fisica and Infn, Universita di Roma La Sapienza

and B. Taglienti (b) (a): Dipartimento di Fisica and Infn, Universita di Cagliari (c): Dipartimento di Fisica and Infn, Universita di Roma La Sapienza Glue Ball Masses and the Chameleon Gauge E. Marinari (a),m.l.paciello (b),g.parisi (c) and B. Taglienti (b) (a): Dipartimento di Fisica and Infn, Universita di Cagliari Via Ospedale 72, 09100 Cagliari

More information

[4] L. F. Cugliandolo, J. Kurchan and G. Parisi,O equilibrium dynamics and aging in

[4] L. F. Cugliandolo, J. Kurchan and G. Parisi,O equilibrium dynamics and aging in [4] L. F. Cugliandolo, J. Kurchan and G. Parisi,O equilibrium dynamics and aging in unfrustrated systems, cond-mat preprint (1994). [5] M. Virasoro, unpublished, quoted in [4]. [6] T. R. Kirkpatrick and

More information

Simulated Annealing. Local Search. Cost function. Solution space

Simulated Annealing. Local Search. Cost function. Solution space Simulated Annealing Hill climbing Simulated Annealing Local Search Cost function? Solution space Annealing Annealing is a thermal process for obtaining low energy states of a solid in a heat bath. The

More information

Local and Stochastic Search

Local and Stochastic Search RN, Chapter 4.3 4.4; 7.6 Local and Stochastic Search Some material based on D Lin, B Selman 1 Search Overview Introduction to Search Blind Search Techniques Heuristic Search Techniques Constraint Satisfaction

More information

Brazilian Journal of Physics, vol. 26, no. 4, december, P. M.C.deOliveira,T.J.P.Penna. Instituto de Fsica, Universidade Federal Fluminense

Brazilian Journal of Physics, vol. 26, no. 4, december, P. M.C.deOliveira,T.J.P.Penna. Instituto de Fsica, Universidade Federal Fluminense Brazilian Journal of Physics, vol. 26, no. 4, december, 1996 677 Broad Histogram Method P. M.C.deOliveira,T.J.P.Penna Instituto de Fsica, Universidade Federal Fluminense Av. Litor^anea s/n, Boa Viagem,

More information

Brazilian Journal of Physics, vol. 27, no. 4, december, with Aperiodic Interactions. Instituto de Fsica, Universidade de S~ao Paulo

Brazilian Journal of Physics, vol. 27, no. 4, december, with Aperiodic Interactions. Instituto de Fsica, Universidade de S~ao Paulo Brazilian Journal of Physics, vol. 27, no. 4, december, 1997 567 Critical Behavior of an Ising Model with periodic Interactions S. T. R. Pinho, T.. S. Haddad, S. R. Salinas Instituto de Fsica, Universidade

More information

Some Polyomino Tilings of the Plane

Some Polyomino Tilings of the Plane Some Polyomino Tilings of the Plane Cristopher Moore SFI WORKING PAPER: 1999-04-03 SFI Working Papers contain accounts of scientific work of the author(s) and do not necessarily represent the views of

More information

v n,t n

v n,t n THE DYNAMICAL STRUCTURE FACTOR AND CRITICAL BEHAVIOR OF A TRAFFIC FLOW MODEL 61 L. ROTERS, S. L UBECK, and K. D. USADEL Theoretische Physik, Gerhard-Mercator-Universitat, 4748 Duisburg, Deutschland, E-mail:

More information

Lecture 14 - P v.s. NP 1

Lecture 14 - P v.s. NP 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) February 27, 2018 Lecture 14 - P v.s. NP 1 In this lecture we start Unit 3 on NP-hardness and approximation

More information

The Landscape of the Traveling Salesman Problem. Peter F. Stadler y. Max Planck Institut fur Biophysikalische Chemie. 080 Biochemische Kinetik

The Landscape of the Traveling Salesman Problem. Peter F. Stadler y. Max Planck Institut fur Biophysikalische Chemie. 080 Biochemische Kinetik The Landscape of the Traveling Salesman Problem Peter F. Stadler y Max Planck Institut fur Biophysikalische Chemie Karl Friedrich Bonhoeer Institut 080 Biochemische Kinetik Am Fassberg, D-3400 Gottingen,

More information

= w 2. w 1. B j. A j. C + j1j2

= w 2. w 1. B j. A j. C + j1j2 Local Minima and Plateaus in Multilayer Neural Networks Kenji Fukumizu and Shun-ichi Amari Brain Science Institute, RIKEN Hirosawa 2-, Wako, Saitama 35-098, Japan E-mail: ffuku, amarig@brain.riken.go.jp

More information

Entropy-Driven Adaptive Representation. Justinian P. Rosca. University of Rochester. Rochester, NY

Entropy-Driven Adaptive Representation. Justinian P. Rosca. University of Rochester. Rochester, NY -Driven Adaptive Representation Justinian P. Rosca Computer Science Department University of Rochester Rochester, NY 14627 rosca@cs.rochester.edu Abstract In the rst genetic programming (GP) book John

More information

Zebo Peng Embedded Systems Laboratory IDA, Linköping University

Zebo Peng Embedded Systems Laboratory IDA, Linköping University TDTS 01 Lecture 8 Optimization Heuristics for Synthesis Zebo Peng Embedded Systems Laboratory IDA, Linköping University Lecture 8 Optimization problems Heuristic techniques Simulated annealing Genetic

More information

CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash

CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash Equilibrium Price of Stability Coping With NP-Hardness

More information

Learning About Spin Glasses Enzo Marinari (Cagliari, Italy) I thank the organizers... I am thrilled since... (Realistic) Spin Glasses: a debated, inte

Learning About Spin Glasses Enzo Marinari (Cagliari, Italy) I thank the organizers... I am thrilled since... (Realistic) Spin Glasses: a debated, inte Learning About Spin Glasses Enzo Marinari (Cagliari, Italy) I thank the organizers... I am thrilled since... (Realistic) Spin Glasses: a debated, interesting issue. Mainly work with: Giorgio Parisi, Federico

More information

1 Introduction Duality transformations have provided a useful tool for investigating many theories both in the continuum and on the lattice. The term

1 Introduction Duality transformations have provided a useful tool for investigating many theories both in the continuum and on the lattice. The term SWAT/102 U(1) Lattice Gauge theory and its Dual P. K. Coyle a, I. G. Halliday b and P. Suranyi c a Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel. b Department ofphysics,

More information

Sensitive Ant Model for Combinatorial Optimization

Sensitive Ant Model for Combinatorial Optimization Sensitive Ant Model for Combinatorial Optimization CAMELIA CHIRA cchira@cs.ubbcluj.ro D. DUMITRESCU ddumitr@cs.ubbcluj.ro CAMELIA-MIHAELA PINTEA cmpintea@cs.ubbcluj.ro Abstract: A combinatorial optimization

More information

Methods for finding optimal configurations

Methods for finding optimal configurations CS 1571 Introduction to AI Lecture 9 Methods for finding optimal configurations Milos Hauskrecht milos@cs.pitt.edu 5329 Sennott Square Search for the optimal configuration Optimal configuration search:

More information

Thermodynamical Approach to the Traveling Salesman Problem: An Efficient Simulation Algorithm I

Thermodynamical Approach to the Traveling Salesman Problem: An Efficient Simulation Algorithm I JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 45, No. l, JANUARY I985 Thermodynamical Approach to the Traveling Salesman Problem: An Efficient Simulation Algorithm I V. CERNY 2 Communicated by

More information

Optimal Search on a Technology Landscape

Optimal Search on a Technology Landscape Optimal Search on a Technology Landscape Stuart A. Kauffman José Lobo William G. Macready SFI WORKING PAPER: 1998-10-091 SFI Working Papers contain accounts of scientific work of the author(s) and do not

More information

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel The Bias-Variance dilemma of the Monte Carlo method Zlochin Mark 1 and Yoram Baram 1 Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel fzmark,baramg@cs.technion.ac.il Abstract.

More information

Simulated Annealing. 2.1 Introduction

Simulated Annealing. 2.1 Introduction Simulated Annealing 2 This chapter is dedicated to simulated annealing (SA) metaheuristic for optimization. SA is a probabilistic single-solution-based search method inspired by the annealing process in

More information

Boxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun

Boxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun Boxlets: a Fast Convolution Algorithm for Signal Processing and Neural Networks Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun AT&T Labs-Research 100 Schultz Drive, Red Bank, NJ 07701-7033

More information

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Coping With NP-hardness Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you re unlikely to find poly-time algorithm. Must sacrifice one of three desired features. Solve

More information

The Evolutionary Unfolding of Complexity

The Evolutionary Unfolding of Complexity The Evolutionary Unfolding of Complexity James P. Crutchfield Erik van Nimwegen SFI WORKING PAPER: 1999-02-015 SFI Working Papers contain accounts of scientific work of the author(s) and do not necessarily

More information

Monte Carlo Lecture Notes II, Jonathan Goodman. Courant Institute of Mathematical Sciences, NYU. January 29, 1997

Monte Carlo Lecture Notes II, Jonathan Goodman. Courant Institute of Mathematical Sciences, NYU. January 29, 1997 Monte Carlo Lecture Notes II, Jonathan Goodman Courant Institute of Mathematical Sciences, NYU January 29, 1997 1 Introduction to Direct Sampling We think of the computer \random number generator" as an

More information

3D HP Protein Folding Problem using Ant Algorithm

3D HP Protein Folding Problem using Ant Algorithm 3D HP Protein Folding Problem using Ant Algorithm Fidanova S. Institute of Parallel Processing BAS 25A Acad. G. Bonchev Str., 1113 Sofia, Bulgaria Phone: +359 2 979 66 42 E-mail: stefka@parallel.bas.bg

More information

NP Completeness of Kauffman s N-k Model, a Tuneably Rugged Fitness Landscape

NP Completeness of Kauffman s N-k Model, a Tuneably Rugged Fitness Landscape P Completeness of Kauffman s -k Model, a Tuneably Rugged Fitness Landscape Edward D. Weinberger SFI WORKIG PAPER: 1996-02-003 SFI Working Papers contain accounts of scientific work of the author(s) and

More information

Lyapunov exponents in random Boolean networks

Lyapunov exponents in random Boolean networks Physica A 284 (2000) 33 45 www.elsevier.com/locate/physa Lyapunov exponents in random Boolean networks Bartolo Luque a;, Ricard V. Sole b;c a Centro de Astrobiolog a (CAB), Ciencias del Espacio, INTA,

More information

Ensemble equivalence for non-extensive thermostatistics

Ensemble equivalence for non-extensive thermostatistics Physica A 305 (2002) 52 57 www.elsevier.com/locate/physa Ensemble equivalence for non-extensive thermostatistics Raul Toral a;, Rafael Salazar b a Instituto Mediterraneo de Estudios Avanzados (IMEDEA),

More information

Learning in Boltzmann Trees. Lawrence Saul and Michael Jordan. Massachusetts Institute of Technology. Cambridge, MA January 31, 1995.

Learning in Boltzmann Trees. Lawrence Saul and Michael Jordan. Massachusetts Institute of Technology. Cambridge, MA January 31, 1995. Learning in Boltzmann Trees Lawrence Saul and Michael Jordan Center for Biological and Computational Learning Massachusetts Institute of Technology 79 Amherst Street, E10-243 Cambridge, MA 02139 January

More information

Temporally Asymmetric Fluctuations are Sufficient for the Biological Energy Transduction

Temporally Asymmetric Fluctuations are Sufficient for the Biological Energy Transduction Temporally Asymmetric Fluctuations are Sufficient for the Biological Energy Transduction Dante R. Chialvo Mark M. Millonas SFI WORKING PAPER: 1995-07-064 SFI Working Papers contain accounts of scientific

More information

Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform. Santa Fe Institute Working Paper (Submitted to Complex Systems)

Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform. Santa Fe Institute Working Paper (Submitted to Complex Systems) Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform Computations Melanie Mitchell 1, Peter T. Hraber 1, and James P. Crutcheld 2 Santa Fe Institute Working Paper 93-3-14 (Submitted to Complex

More information

Lecture 15 - NP Completeness 1

Lecture 15 - NP Completeness 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) February 29, 2018 Lecture 15 - NP Completeness 1 In the last lecture we discussed how to provide

More information

Numerical Analysis of 2-D Ising Model. Ishita Agarwal Masters in Physics (University of Bonn) 17 th March 2011

Numerical Analysis of 2-D Ising Model. Ishita Agarwal Masters in Physics (University of Bonn) 17 th March 2011 Numerical Analysis of 2-D Ising Model By Ishita Agarwal Masters in Physics (University of Bonn) 17 th March 2011 Contents Abstract Acknowledgment Introduction Computational techniques Numerical Analysis

More information

Gene Pool Recombination in Genetic Algorithms

Gene Pool Recombination in Genetic Algorithms Gene Pool Recombination in Genetic Algorithms Heinz Mühlenbein GMD 53754 St. Augustin Germany muehlenbein@gmd.de Hans-Michael Voigt T.U. Berlin 13355 Berlin Germany voigt@fb10.tu-berlin.de Abstract: A

More information

Motivation. Lecture 23. The Stochastic Neuron ( ) Properties of Logistic Sigmoid. Logistic Sigmoid With Varying T. Logistic Sigmoid T = 0.

Motivation. Lecture 23. The Stochastic Neuron ( ) Properties of Logistic Sigmoid. Logistic Sigmoid With Varying T. Logistic Sigmoid T = 0. Motivation Lecture 23 Idea: with low probability, go against the local field move up the energy surface make the wrong microdecision Potential value for optimization: escape from local optima Potential

More information

Markov Chain Monte Carlo The Metropolis-Hastings Algorithm

Markov Chain Monte Carlo The Metropolis-Hastings Algorithm Markov Chain Monte Carlo The Metropolis-Hastings Algorithm Anthony Trubiano April 11th, 2018 1 Introduction Markov Chain Monte Carlo (MCMC) methods are a class of algorithms for sampling from a probability

More information

c(t) t (T-0.21) Figure 14: Finite-time scaling eq.(23) for the open case. Good scaling is obtained with

c(t) t (T-0.21) Figure 14: Finite-time scaling eq.(23) for the open case. Good scaling is obtained with 1 0.8 0.6 c(t) 0.4 0.2 0 0.001 0.01 0.1 1 10 t (T-0.21) 2 Figure 14: Finite-time scaling eq.(23) for the open case. Good scaling is obtained with T G 0:21 0:02 and 2. 32 1 0.8 0.6 c(t) 0.4 0.2 0 0.01 0.1

More information

Hill climbing: Simulated annealing and Tabu search

Hill climbing: Simulated annealing and Tabu search Hill climbing: Simulated annealing and Tabu search Heuristic algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Hill climbing Instead of repeating local search, it is

More information

Stochastic renormalization group in percolation: I. uctuations and crossover

Stochastic renormalization group in percolation: I. uctuations and crossover Physica A 316 (2002) 29 55 www.elsevier.com/locate/physa Stochastic renormalization group in percolation: I. uctuations and crossover Martin Z. Bazant Department of Mathematics, Massachusetts Institute

More information

Fundamentals of Metaheuristics

Fundamentals of Metaheuristics Fundamentals of Metaheuristics Part I - Basic concepts and Single-State Methods A seminar for Neural Networks Simone Scardapane Academic year 2012-2013 ABOUT THIS SEMINAR The seminar is divided in three

More information

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria 12. LOCAL SEARCH gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley h ttp://www.cs.princeton.edu/~wayne/kleinberg-tardos

More information

Computing the acceptability semantics. London SW7 2BZ, UK, Nicosia P.O. Box 537, Cyprus,

Computing the acceptability semantics. London SW7 2BZ, UK, Nicosia P.O. Box 537, Cyprus, Computing the acceptability semantics Francesca Toni 1 and Antonios C. Kakas 2 1 Department of Computing, Imperial College, 180 Queen's Gate, London SW7 2BZ, UK, ft@doc.ic.ac.uk 2 Department of Computer

More information

Algorithms and Complexity theory

Algorithms and Complexity theory Algorithms and Complexity theory Thibaut Barthelemy Some slides kindly provided by Fabien Tricoire University of Vienna WS 2014 Outline 1 Algorithms Overview How to write an algorithm 2 Complexity theory

More information

Statistical Analysis of Backtracking on. Inconsistent CSPs? Irina Rish and Daniel Frost.

Statistical Analysis of Backtracking on. Inconsistent CSPs? Irina Rish and Daniel Frost. Statistical Analysis of Backtracking on Inconsistent CSPs Irina Rish and Daniel Frost Department of Information and Computer Science University of California, Irvine, CA 92697-3425 firinar,frostg@ics.uci.edu

More information

Quantum Annealing in spin glasses and quantum computing Anders W Sandvik, Boston University

Quantum Annealing in spin glasses and quantum computing Anders W Sandvik, Boston University PY502, Computational Physics, December 12, 2017 Quantum Annealing in spin glasses and quantum computing Anders W Sandvik, Boston University Advancing Research in Basic Science and Mathematics Example:

More information

6 Markov Chain Monte Carlo (MCMC)

6 Markov Chain Monte Carlo (MCMC) 6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods, with dependent samples from an ergodic Markov chain, whose limiting (stationary) distribution

More information

Statistical Complexity of Simple 1D Spin Systems

Statistical Complexity of Simple 1D Spin Systems Statistical Complexity of Simple 1D Spin Systems James P. Crutchfield Physics Department, University of California, Berkeley, CA 94720-7300 and Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501

More information

Isotropy and Metastable States: The Landscape of the XY Hamiltonian Revisited

Isotropy and Metastable States: The Landscape of the XY Hamiltonian Revisited Isotropy and Metastable States: The Landscape of the Y Hamiltonian Revisited Ricardo Garcia-Pelayo Peter F. Stadler SFI WORKING PAPER: 1996-05-034 SFI Working Papers contain accounts of scientific work

More information

Effects of Neutral Selection on the Evolution of Molecular Species

Effects of Neutral Selection on the Evolution of Molecular Species Effects of Neutral Selection on the Evolution of Molecular Species M. E. J. Newman Robin Engelhardt SFI WORKING PAPER: 1998-01-001 SFI Working Papers contain accounts of scientific work of the author(s)

More information

Stochastic Learning in a Neural Network with Adapting. Synapses. Istituto Nazionale di Fisica Nucleare, Sezione di Bari

Stochastic Learning in a Neural Network with Adapting. Synapses. Istituto Nazionale di Fisica Nucleare, Sezione di Bari Stochastic Learning in a Neural Network with Adapting Synapses. G. Lattanzi 1, G. Nardulli 1, G. Pasquariello and S. Stramaglia 1 Dipartimento di Fisica dell'universita di Bari and Istituto Nazionale di

More information

Hertz, Krogh, Palmer: Introduction to the Theory of Neural Computation. Addison-Wesley Publishing Company (1991). (v ji (1 x i ) + (1 v ji )x i )

Hertz, Krogh, Palmer: Introduction to the Theory of Neural Computation. Addison-Wesley Publishing Company (1991). (v ji (1 x i ) + (1 v ji )x i ) Symmetric Networks Hertz, Krogh, Palmer: Introduction to the Theory of Neural Computation. Addison-Wesley Publishing Company (1991). How can we model an associative memory? Let M = {v 1,..., v m } be a

More information

<f> Generation t. <f> Generation t

<f> Generation t. <f> Generation t Finite Populations Induce Metastability in Evolutionary Search Erik van Nimwegen y James P. Crutcheld, yz Melanie Mitchell y y Santa Fe Institute, 99 Hyde Park Road, Santa Fe, NM 8750 z Physics Department,

More information

Nonextensive Aspects of Self- Organized Scale-Free Gas-Like Networks

Nonextensive Aspects of Self- Organized Scale-Free Gas-Like Networks Nonextensive Aspects of Self- Organized Scale-Free Gas-Like Networks Stefan Thurner Constantino Tsallis SFI WORKING PAPER: 5-6-6 SFI Working Papers contain accounts of scientific work of the author(s)

More information

An exploration of matrix equilibration

An exploration of matrix equilibration An exploration of matrix equilibration Paul Liu Abstract We review three algorithms that scale the innity-norm of each row and column in a matrix to. The rst algorithm applies to unsymmetric matrices,

More information

CMOS Ising Computer to Help Optimize Social Infrastructure Systems

CMOS Ising Computer to Help Optimize Social Infrastructure Systems FEATURED ARTICLES Taking on Future Social Issues through Open Innovation Information Science for Greater Industrial Efficiency CMOS Ising Computer to Help Optimize Social Infrastructure Systems As the

More information

The Phase Transition of the 2D-Ising Model

The Phase Transition of the 2D-Ising Model The Phase Transition of the 2D-Ising Model Lilian Witthauer and Manuel Dieterle Summer Term 2007 Contents 1 2D-Ising Model 2 1.1 Calculation of the Physical Quantities............... 2 2 Location of the

More information

Study of Phase Transition in Pure Zirconium using Monte Carlo Simulation

Study of Phase Transition in Pure Zirconium using Monte Carlo Simulation Study of Phase Transition in Pure Zirconium using Monte Carlo Simulation Wathid Assawasunthonnet, Abhinav Jain Department of Physics University of Illinois assawas1@illinois.edu Urbana, IL Abstract NPT

More information

Dual Monte Carlo and Cluster Algorithms. N. Kawashima and J.E. Gubernatis. Abstract. theory and not from the usual viewpoint of a particular model.

Dual Monte Carlo and Cluster Algorithms. N. Kawashima and J.E. Gubernatis. Abstract. theory and not from the usual viewpoint of a particular model. Phys. Rev. E, to appear. LA-UR-94-6 Dual Monte Carlo and Cluster Algorithms N. Kawashima and J.E. Gubernatis Center for Nonlinear Studies and Theoretical Division Los Alamos National Laboratory, Los Alamos,

More information

Methods for finding optimal configurations

Methods for finding optimal configurations S 2710 oundations of I Lecture 7 Methods for finding optimal configurations Milos Hauskrecht milos@pitt.edu 5329 Sennott Square S 2710 oundations of I Search for the optimal configuration onstrain satisfaction

More information

Lecture 4: Simulated Annealing. An Introduction to Meta-Heuristics, Produced by Qiangfu Zhao (Since 2012), All rights reserved

Lecture 4: Simulated Annealing. An Introduction to Meta-Heuristics, Produced by Qiangfu Zhao (Since 2012), All rights reserved Lecture 4: Simulated Annealing Lec04/1 Neighborhood based search Step 1: s=s0; Step 2: Stop if terminating condition satisfied; Step 3: Generate N(s); Step 4: s =FindBetterSolution(N(s)); Step 5: s=s ;

More information