Simulated Annealing: Introduction; Pseudotemperature, Free Energy and Entropy; A Short Detour into Statistical Mechanics
Aims: to obtain a mathematical framework for stochastic machines; to study simulated annealing.

Reference: parts of chapter 11 of Haykin, S., Neural Networks: A Comprehensive Foundation, Prentice-Hall, 1999, and Neural Networks and Learning Machines, Prentice-Hall, 2009.

Keywords: temperature, annealing schedule, Metropolis algorithm, combinatorial optimization, energy function, move set, mean-field annealing, quadratic assignment problem, cost function, critical temperature.

Plan: statistical mechanics; Metropolis algorithm; annealing schedule; travelling salesperson problem; energy function; move sets for TSP; mean-field annealing; critical temperature.

Introduction

Because (industrial-strength) neural networks may have thousands of degrees of freedom (e.g. weights), it is possible to get inspiration from the theory of statistical mechanics. Statistical mechanics deals with the macroscopic equilibrium properties of large systems of elements that are subject to the microscopic laws of mechanics. Simulated annealing is an optimization technique that uses a thermodynamic metaphor.

Simulated Annealing Notes, Bill Wilson, 2013

A Short Detour into Statistical Mechanics

Consider a system with many degrees of freedom that can be in many possible states. Let p_i denote the probability of state i; then Σ_i p_i = 1. Let E_i denote the energy of the system in state i. Statistical mechanics says that when a physical system is in thermal equilibrium with its environment, state i occurs with probability

    p_i = (1/Z) exp(-E_i / (k_B T))        (11.3)

where T is the temperature (Kelvin scale), k_B is Boltzmann's constant, and Z is a constant. Since Σ_i p_i = 1, we can derive

    Z = Σ_i exp(-E_i / (k_B T))            (11.4)

The probability distribution (11.3) is called the Gibbs distribution, and the factor exp(-E_i/(k_B T)) is called the Boltzmann factor. Note from (11.3) that lower energy, or higher temperature, means higher probability.
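The Gibbs distribution is easy to compute directly. A minimal Python sketch (the three state energies are made up for illustration) shows how probability concentrates on low-energy states as the temperature falls:

```python
import math

def gibbs(energies, T, kB=1.0):
    """Gibbs distribution: p_i = exp(-E_i/(kB*T)) / Z."""
    factors = [math.exp(-E / (kB * T)) for E in energies]  # Boltzmann factors
    Z = sum(factors)                                       # partition function
    return [f / Z for f in factors]

E = [0.0, 1.0, 2.0]         # three states; state 0 has the lowest energy

p_hot = gibbs(E, T=10.0)    # high temperature: close to uniform
p_cold = gibbs(E, T=0.1)    # low temperature: mass concentrates on state 0

print(p_hot)
print(p_cold)
```

At T = 10 the three probabilities are nearly equal; at T = 0.1 virtually all of the probability sits on the lowest-energy state. This is exactly the behaviour an annealing schedule exploits.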
As T is reduced, the probability is concentrated in a smaller subset of low-energy states.

Pseudotemperature, Free Energy and Entropy

We can mimic this setup in a neural-net context using a concept of pseudotemperature T. As this T has no scale (like Kelvin), we don't need an analogue of k_B, so we can write

    p_i = (1/Z) exp(-E_i / T)   and   Z = Σ_i exp(-E_i / T)        (11.5/11.6)

Notice that if T = 1 and Z = 1, then E_i = -log_e p_i, so -log_e p_i measures something like energy.

The Helmholtz free energy, F, of a system is defined as

    F = -T log_e Z        (11.7)

The average energy E of the system is defined by

    E = Σ_i p_i E_i       (11.8)
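These definitions can be checked numerically. The sketch below (Python; the state energies are illustrative) computes Z, the free energy F and the average energy, and verifies the relation F = E - TH derived in the next section, where H = -Σ_i p_i log_e p_i is the entropy:

```python
import math

T = 2.0                     # pseudotemperature, so no kB is needed
E = [0.0, 1.0, 3.0]         # illustrative state energies

Z = sum(math.exp(-Ei / T) for Ei in E)         # partition function
p = [math.exp(-Ei / T) / Z for Ei in E]        # Gibbs probabilities

F = -T * math.log(Z)                           # Helmholtz free energy
E_avg = sum(pi * Ei for pi, Ei in zip(p, E))   # average energy
H = -sum(pi * math.log(pi) for pi in p)        # entropy (defined next section)

print(abs(F - (E_avg - T * H)))   # essentially 0: F = E - T*H holds
```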
Entropy

Using (11.5) to (11.8), we can derive

    E - F = -T Σ_i p_i log_e p_i        (11.9)

The RHS (apart from the factor T) is called the entropy H of the system:

    H = -Σ_i p_i log_e p_i              (11.10)

so E - F = TH, or F = E - TH.           (11.11)

Entropy and Thermal Equilibrium

When two systems A and A' come into contact, the total entropy of the two systems tends to increase: ΔH + ΔH' ≥ 0. In view of (11.11) (F = E - TH), this means that the free energy of the system, F, tends to decrease, reaching a minimum when the two systems reach thermal equilibrium: the minimum of the free energy of a stochastic system with respect to the variables of the system is achieved at thermal equilibrium, at which point the system is governed by the Gibbs distribution.

To see why (11.9) holds, turn (11.5) inside out to obtain E_i = -T(log_e p_i + log_e Z). Thus

    E = Σ_i p_i E_i
      = -T Σ_i p_i (log_e p_i + log_e Z)
      = -T Σ_i p_i log_e p_i - T log_e Z Σ_i p_i
      = -T Σ_i p_i log_e p_i - T log_e Z
      = -T Σ_i p_i log_e p_i + F,

so E - F = -T Σ_i p_i log_e p_i = TH, as claimed.

Simulated Annealing

Simulated annealing is an optimization technique. In Hopfield nets, local minima are used in a positive way, but in optimization problems local minima get in the way: one must have a way to escape from them. The two ideas of simulated annealing are as follows:

1. When optimizing a very large and complex system (i.e., a system with many degrees of freedom), instead of always going downhill, try to go downhill most of the time.

2. Initially, the probability of going uphill should be relatively high ("high temperature"), but as time (iterations) goes on, this probability should decrease (the temperature decreases according to an annealing schedule).

The term annealing comes from the technique of hardening a metal (i.e. finding a state of its crystalline lattice that is highly packed) by hammering it while initially very hot and then at a succession of decreasing temperatures [2].

[2] Kirkpatrick, S., Gelatt, C.D., and Vecchi, M.P. (1983) Optimization by simulated annealing, Science 220: 671-680.
Metropolis Algorithm

The algorithm for simulated annealing is a variant (with time-dependent temperature) of the Metropolis algorithm [3]. In each step of this algorithm, a unit of the system is subjected to a small random displacement (or transition, or flip), and the resulting change ΔE in the energy of the system is computed. If ΔE ≤ 0, the displacement is accepted. If ΔE > 0, the algorithm proceeds in a probabilistic manner: the probability that the displacement will be accepted is p·exp(-ΔE/T), where p is a constant and T is the temperature. If T is large, exp(-ΔE/T) approaches 1; thus p is the probability that a transition to a higher-energy state will be accepted when the temperature is infinite.

The use of the expression exp(-ΔE/T) ensures that at thermal equilibrium the Boltzmann distribution of states prevails [4]. This in turn ensures that, at high temperatures, all states have equal probability of occurring, while as T → 0, only the states with minimum energy have a non-zero probability of occurrence.

[3] Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., and Teller, E. (1953) Equation of state calculations by fast computing machines, Journal of Chemical Physics 21: 1087-1092.

[4] Boltzmann, L. (1872) Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen, Sitzungsberichte der Mathematisch-Naturwissenschaftlichen Classe der Kaiserlichen Akademie der Wissenschaften.
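A minimal, generic sketch of the Metropolis acceptance rule inside an annealing loop (Python). The toy objective, the Gaussian move, and all schedule constants are illustrative assumptions, p is taken to be 1, and this is not the course's tsp.m code:

```python
import math
import random

def accept(dE, T):
    """Metropolis rule: always accept downhill; uphill with prob exp(-dE/T)."""
    return dE <= 0 or random.random() < math.exp(-dE / T)

def anneal(energy, move, x0, T0=10.0, alpha=0.95, steps=100, T_final=1e-3):
    x, T = x0, T0
    while T > T_final:
        for _ in range(steps):
            y = move(x)
            if accept(energy(y) - energy(x), T):
                x = y
        T *= alpha               # geometric annealing schedule T_k = alpha * T_{k-1}
    return x

random.seed(1)
# toy problem: minimise (x - 3)^2, moves are small Gaussian displacements
x_min = anneal(lambda x: (x - 3.0) ** 2,
               lambda x: x + random.gauss(0.0, 0.5),
               x0=20.0)
print(x_min)                     # ends up near 3
```

At high T nearly every proposal is accepted (a random walk); as T shrinks, uphill moves are rejected and the state freezes near a minimum.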
Design of the Annealing Schedule

- Initial value of the temperature: T_0 is chosen high enough to ensure that virtually all proposed transitions are accepted by the simulated annealing algorithm.
- Decrement: usually a geometric progression of temperatures is used, T_k = α·T_{k-1}, where α is a constant slightly less than 1. At each temperature, enough transitions are attempted that either there are 10 accepted transitions per unit on the average, or the number of attempts exceeds 100 times the number of units.
- Final value of the temperature: the system is "frozen", and annealing stops, if the desired number of acceptances is not achieved at three successive temperatures.

Simulated Annealing for Combinatorial Optimization

Simulated annealing is well suited to solving combinatorial optimization problems. Solutions (or states corresponding to possible solutions) are the states of the system, and the energy function is a function giving the cost of a solution. Kirkpatrick et al. applied their methods to the Travelling Salesperson Problem, finding near-optimal solutions for problems with thousands of sites, using α = 0.9.

In order to apply simulated annealing to such a problem, it is necessary to have a set of neurons and an energy function. The figure below illustrates the neural layout and its interpretation.

[Figure: a 5 x 5 array of neurons, rows labelled by cities and columns by sequence positions. Neural configuration for a tour of 5 cities C1-C5, in the order C1 C4 C2 C3 C5 C1.]

Energy Function for TSP

Constraints on the neural activations (for a solution) include that there should be:
- exactly one neuron "on" (activation 1) in each row (i.e. each city is visited exactly once);
- exactly one neuron "on" in each column (the salesperson only visits one city at a time!).

Let us write v_ij for the activation level of the neuron in row i and column j. One constraint can be expressed by saying that we want to minimize

    e_j = (1 - (v_1j + v_2j + v_3j + v_4j + v_5j))^2 = (1 - Σ_i v_ij)^2

That's for column j.
Taking all columns into account, we want to minimize

    E_1 = Σ_j e_j = Σ_j (1 - Σ_i v_ij)^2

Similarly, taking all rows into account, we want to minimize

    E_2 = Σ_i (1 - Σ_j v_ij)^2

Objective Function

An optimization problem, of course, comes with an objective function to be minimized. In our case, suppose that d_ij is the distance from city i to city j. Suppose that cities 1 and 2 are adjacent on the salesperson's tour, and that city 1 is the m-th city visited. Then city 2 must be either the (m-1)-st or the (m+1)-st on the tour, and the contribution to the total distance travelled will be

    d_12 (v_1,m v_2,m-1 + v_1,m v_2,m+1)

Remember that in a solution, if v_2,m-1 = 1, then v_2,m+1 = 0, and vice versa. Generalizing this, for the total distance travelled we get the objective function

    E_3 = 0.5 Σ_k Σ_i Σ_{j≠i} d_ij v_ik (v_j,k-1 + v_j,k+1)

To minimize E_1, E_2, and E_3 simultaneously, we minimize

    E = k_1 E_1 + k_2 E_2 + k_3 E_3

where the k_i are positive constants. The move set (next slide) maintains the constraint that each v_ij = 0 or 1.
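To make the energy function concrete, here is a Python sketch that evaluates E_1, E_2 and E_3 for a 5-city tour C1 → C4 → C2 → C3 → C5 → C1. The distance matrix is made up for illustration, and tour positions are wrapped cyclically in E_3 (an assumption about boundary handling):

```python
# rows = cities, columns = tour positions; v[i][k] = 1 iff city i is visited k-th
n = 5
order = [0, 3, 1, 2, 4]                       # city index at each tour position
v = [[1 if order[k] == i else 0 for k in range(n)] for i in range(n)]

E1 = sum((1 - sum(v[i][j] for i in range(n))) ** 2 for j in range(n))  # columns
E2 = sum((1 - sum(v[i][j] for j in range(n))) ** 2 for i in range(n))  # rows

# illustrative symmetric distance matrix (made up for this sketch)
d = [[0, 2, 9, 3, 6],
     [2, 0, 4, 1, 7],
     [9, 4, 0, 5, 8],
     [3, 1, 5, 0, 2],
     [6, 7, 8, 2, 0]]

# E3 counts each undirected tour edge twice, hence the factor 0.5
E3 = 0.5 * sum(d[i][j] * v[i][k] * (v[j][(k - 1) % n] + v[j][(k + 1) % n])
               for k in range(n) for i in range(n) for j in range(n) if j != i)

print(E1, E2, E3)   # constraint terms are 0; E3 equals the tour length
```

For a valid tour both constraint terms vanish and E_3 is exactly the closed-tour length, so minimising E drives the network towards short, valid tours.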
Move Set for Simulated Annealing

- Inversion: a section of the tour, e.g. 6,7,8,9, is replaced by its reversal 9,8,7,6.
- Translation: remove a section of the tour and replace it between two other consecutive cities (in this case 4 and 5).
- Switching: select two non-consecutive cities and switch them in the tour.

Mean-Field Annealing

Simulated annealing can be slow, and the annealing schedule can be part of the problem. In some cases it is found that most of the crystallization of the system takes place around a particular temperature, termed the critical temperature. Fang, Wilson and Li [5] tackled the quadratic assignment problem exploiting this fact.

Matlab code for simulated annealing is available in tsp.m, available at the "Other Matlab code" link under Software Availability on the class home page.

Reference for parts of this material: Neuro-Fuzzy and Soft Computing by J.-S.R. Jang, C.-T. Sun, and E. Mizutani, Prentice-Hall, 1997.

[5] Luyuan Fang, William H. Wilson, and Tao Li (1990) Mean field annealing neural nets for quadratic assignment, INNC-90-PARIS Proceedings.

The Quadratic Assignment Problem

Consider the optimal location of m plants at n possible sites, where n ≥ m, in the following situation:
- the amount of goods to be transported between plants is given;
- there is a cost associated with moving goods between sites.

[Figure: an example with plants, sites, goods flows between plants, and transport costs per unit between sites.]

The goal is to allocate plants to sites so as to minimise total cost.

Terminology:
- x_ik = 1 if plant k is located at site i, and 0 otherwise; there is only one plant per site.
- c_ij is the cost of transporting one unit of goods from site i to site j; c_ij ≥ 0.
- d_kl is the amount of goods to be transported from plant k to plant l; d_kl ≥ 0.

Cost function:

    f(x) = Σ_{i=1..n} Σ_{j≠i} Σ_{k=1..m} Σ_{l≠k} c_ij d_kl x_ik x_jl

Minimising this function is an NP-hard problem.
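The cost function can be evaluated directly. A small Python sketch on a made-up instance (2 plants, 3 sites; the costs and flows are illustrative assumptions). Note that f sums over ordered pairs, so each plant pair contributes once in each direction:

```python
# f(x) = sum over i, j != i, k, l != k of c[i][j] * d[k][l] * x[i][k] * x[j][l]
n, m = 3, 2
c = [[0, 4, 6],
     [4, 0, 2],
     [6, 2, 0]]          # transport cost per unit between sites (made up)
d = [[0, 5],
     [5, 0]]             # goods flow between plants (made up)

def qap_cost(x):
    return sum(c[i][j] * d[k][l] * x[i][k] * x[j][l]
               for i in range(n) for j in range(n) if j != i
               for k in range(m) for l in range(m) if l != k)

# assignment: plant 0 at site 0, plant 1 at site 2
x = [[1, 0],
     [0, 0],
     [0, 1]]
print(qap_cost(x))   # flow of 5 each way over the cost-6 link between sites 0 and 2
```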
Neural architecture for this problem

We choose a two-dimensional array of neurons, with rows indexed by the n sites and columns by the m plants. x_ij represents the state of the (i,j)-th neuron: x_ij = 1 means that plant j is assigned to site i. There are two constraints on the x_ij in a solution:

- every plant must be located at exactly one site: Σ_{i=1..n} x_ij = 1 for each plant j;
- only one plant per site: Σ_{j=1..m} x_ij ≤ 1 for every site i.

Energy function:

    E = (A/2) Σ_{i=1..n} Σ_{j≠i} Σ_{k=1..m} Σ_{l≠k} c_ij d_kl x_ik x_jl
        + (B/2) Σ_{i=1..n} Σ_{k=1..m} Σ_{l≠k} x_ik x_il
        + C Σ_{k=1..m} (Σ_{i=1..n} x_ik - 1)^2

where A, B, and C are constants. The energy function is minimised if the constraints are satisfied and the cost function is minimised.

Critical Temperature

Phenomenon: when the temperature T is very high, the network reaches an equilibrium point where all the neurons have similar activation values near 0.5; as T is decreased, this point is also lowered; at a certain temperature T_c (the critical temperature), this point drops down to δ, which depends on the parameters A, B, C and t. This is the lowest equilibrium point at which all neurons have similar activation values.

Behaviour below T_c: as the temperature drops below T_c, the neuron activations diverge rapidly towards 0 and 1; when the temperature becomes very low, the network settles into a stable state which represents a feasible solution to the problem. The neuron activation values do not diverge until the critical temperature is reached. Near the critical temperature, the neuron activations rapidly diverge towards the two extreme points 0 and 1. Below T_c, neuron activations again remain relatively stable.

Critical Temperature Estimate

Let m_c and m_d be the mean values of the c_ij and the d_kl.
The lowest equilibrium point δ is estimated as

    δ ≈ C / (A(n-1)(m-1) m_c m_d + B(m-1) + nC)

The expected critical temperature is estimated as

    T_c = 0.5 t (1-δ) [A(m-1)(n-1) m_c m_d δ + B(m-1) δ + C(nδ - 1)]

Annealing Schedule for Mean-Field Annealing

- Start at T = t(1-δ)[A(m-1)(n-1) c_max d_max δ + B(m-1) δ + C(nδ - 1)], and simulate until equilibrium.
- Around T_c (between T_max and T_min), the temperature changes according to

      ΔT = K (T - T_c) / (T_max - T_min) + ε

  where T is the present temperature and ε and K are constants. At each temperature, iterate s steps, where s is large enough to guarantee reaching equilibrium at a temperature above the actual critical temperature.
- At a low temperature near 0, simulate until equilibrium.
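A sketch of this cooling schedule in Python. The update ΔT = K·|T - T_c|/(T_max - T_min) + ε is one reading of the rule above, with the absolute value added so the step stays positive below T_c; K, ε and the temperatures are all illustrative. The point is simply that the step size shrinks near T_c, so the network cools slowly through the critical region:

```python
def mfa_temperatures(T_start, T_c, T_max, T_min, K=0.1, eps=1e-3, T_stop=0.01):
    """Cooling schedule whose step size shrinks near the critical temperature."""
    temps, T = [], T_start
    while T > T_stop:
        temps.append(T)
        T -= K * abs(T - T_c) / (T_max - T_min) + eps   # small steps near T_c
    return temps

temps = mfa_temperatures(T_start=2.0, T_c=1.0, T_max=2.0, T_min=0.5)
steps = [a - b for a, b in zip(temps, temps[1:])]
# the smallest cooling step occurs at the temperature closest to T_c
print(min(steps), temps[steps.index(min(steps))])
```

The ε term keeps the step strictly positive, so the schedule always terminates rather than stalling at T_c.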
Simulation Results

[Table: run number, optimal cost, mean-field cost, and mean-field/optimal ratio, for randomly generated 5 by 5 quadratic assignment problems solved by the mean field network.]

[Figure: critical temperature plot for the 5 by 5 mean field network.]
More informationDeep Belief Networks are compact universal approximators
1 Deep Belief Networks are compact universal approximators Nicolas Le Roux 1, Yoshua Bengio 2 1 Microsoft Research Cambridge 2 University of Montreal Keywords: Deep Belief Networks, Universal Approximation
More information8.5 Sequencing Problems
8.5 Sequencing Problems Basic genres. Packing problems: SET-PACKING, INDEPENDENT SET. Covering problems: SET-COVER, VERTEX-COVER. Constraint satisfaction problems: SAT, 3-SAT. Sequencing problems: HAMILTONIAN-CYCLE,
More informationSolving TSP Using Lotka-Volterra Neural Networks without Self-Excitatory
Solving TSP Using Lotka-Volterra Neural Networks without Self-Excitatory Manli Li, Jiali Yu, Stones Lei Zhang, Hong Qu Computational Intelligence Laboratory, School of Computer Science and Engineering,
More informationTimo Latvala Landscape Families
HELSINKI UNIVERSITY OF TECHNOLOGY Department of Computer Science Laboratory for Theoretical Computer Science T-79.300 Postgraduate Course in Theoretical Computer Science Timo Latvala Landscape Families
More informationECE521 Lectures 9 Fully Connected Neural Networks
ECE521 Lectures 9 Fully Connected Neural Networks Outline Multi-class classification Learning multi-layer neural networks 2 Measuring distance in probability space We learnt that the squared L2 distance
More informationMetaheuristics and Local Search. Discrete optimization problems. Solution approaches
Discrete Mathematics for Bioinformatics WS 07/08, G. W. Klau, 31. Januar 2008, 11:55 1 Metaheuristics and Local Search Discrete optimization problems Variables x 1,...,x n. Variable domains D 1,...,D n,
More informationLecture H2. Heuristic Methods: Iterated Local Search, Simulated Annealing and Tabu Search. Saeed Bastani
Simulation Lecture H2 Heuristic Methods: Iterated Local Search, Simulated Annealing and Tabu Search Saeed Bastani saeed.bastani@eit.lth.se Spring 2017 Thanks to Prof. Arne Løkketangen at Molde University
More informationMinicourse on: Markov Chain Monte Carlo: Simulation Techniques in Statistics
Minicourse on: Markov Chain Monte Carlo: Simulation Techniques in Statistics Eric Slud, Statistics Program Lecture 1: Metropolis-Hastings Algorithm, plus background in Simulation and Markov Chains. Lecture
More informationIntelligens Számítási Módszerek Genetikus algoritmusok, gradiens mentes optimálási módszerek
Intelligens Számítási Módszerek Genetikus algoritmusok, gradiens mentes optimálási módszerek 2005/2006. tanév, II. félév Dr. Kovács Szilveszter E-mail: szkovacs@iit.uni-miskolc.hu Informatikai Intézet
More informationThe Quadratic Assignment Problem
The Quadratic Assignment Problem Joel L. O. ; Lalla V. ; Mbongo J. ; Ali M. M. ; Tuyishimire E.; Sawyerr B. A. Supervisor Suriyakat W. (Not in the picture) (MISG 2013) QAP 1 / 30 Introduction Introduction
More informationSwarm Intelligence Traveling Salesman Problem and Ant System
Swarm Intelligence Leslie Pérez áceres leslie.perez.caceres@ulb.ac.be Hayfa Hammami haifa.hammami@ulb.ac.be IRIIA Université Libre de ruxelles (UL) ruxelles, elgium Outline 1.oncept review 2.Travelling
More informationArtificial Neural Networks. MGS Lecture 2
Artificial Neural Networks MGS 2018 - Lecture 2 OVERVIEW Biological Neural Networks Cell Topology: Input, Output, and Hidden Layers Functional description Cost functions Training ANNs Back-Propagation
More informationNeural networks. Chapter 20. Chapter 20 1
Neural networks Chapter 20 Chapter 20 1 Outline Brains Neural networks Perceptrons Multilayer networks Applications of neural networks Chapter 20 2 Brains 10 11 neurons of > 20 types, 10 14 synapses, 1ms
More informationA Particle Swarm Optimization (PSO) Primer
A Particle Swarm Optimization (PSO) Primer With Applications Brian Birge Overview Introduction Theory Applications Computational Intelligence Summary Introduction Subset of Evolutionary Computation Genetic
More informationOptimisation and Operations Research
Optimisation and Operations Research Lecture 15: The Greedy Heuristic Matthew Roughan http://www.maths.adelaide.edu.au/matthew.roughan/ Lecture_notes/OORII/ School of
More informationPart B" Ants (Natural and Artificial)! Langton s Vants" (Virtual Ants)! Vants! Example! Time Reversibility!
Part B" Ants (Natural and Artificial)! Langton s Vants" (Virtual Ants)! 11/14/08! 1! 11/14/08! 2! Vants!! Square grid!! Squares can be black or white!! Vants can face N, S, E, W!! Behavioral rule:!! take
More informationWeek 4: Hopfield Network
Week 4: Hopfield Network Phong Le, Willem Zuidema November 20, 2013 Last week we studied multi-layer perceptron, a neural network in which information is only allowed to transmit in one direction (from
More informationAnalysis of Algorithms. Unit 5 - Intractable Problems
Analysis of Algorithms Unit 5 - Intractable Problems 1 Intractable Problems Tractable Problems vs. Intractable Problems Polynomial Problems NP Problems NP Complete and NP Hard Problems 2 In this unit we
More information3.091 Introduction to Solid State Chemistry. Lecture Notes No. 6a BONDING AND SURFACES
3.091 Introduction to Solid State Chemistry Lecture Notes No. 6a BONDING AND SURFACES 1. INTRODUCTION Surfaces have increasing importance in technology today. Surfaces become more important as the size
More informationA Two-Stage Simulated Annealing Methodology
A Two-Stage Simulated Annealing Methodology James M. Varanelli and James P. Cohoon Department of Computer Science University of Virginia Charlottesville, VA 22903 USA Corresponding Author: James P. Cohoon
More informationLecture 5: Logistic Regression. Neural Networks
Lecture 5: Logistic Regression. Neural Networks Logistic regression Comparison with generative models Feed-forward neural networks Backpropagation Tricks for training neural networks COMP-652, Lecture
More informationSolving the flow fields in conduits and networks using energy minimization principle with simulated annealing
Solving the flow fields in conduits and networks using energy minimization principle with simulated annealing Taha Sochi University College London, Department of Physics & Astronomy, Gower Street, London,
More informationPhase Transition & Approximate Partition Function In Ising Model and Percolation In Two Dimension: Specifically For Square Lattices
IOSR Journal of Applied Physics (IOSR-JAP) ISS: 2278-4861. Volume 2, Issue 3 (ov. - Dec. 2012), PP 31-37 Phase Transition & Approximate Partition Function In Ising Model and Percolation In Two Dimension:
More informationSimulated Annealing for Constrained Global Optimization
Monte Carlo Methods for Computation and Optimization Final Presentation Simulated Annealing for Constrained Global Optimization H. Edwin Romeijn & Robert L.Smith (1994) Presented by Ariel Schwartz Objective
More informationTwo simple lattice models of the equilibrium shape and the surface morphology of supported 3D crystallites
Bull. Nov. Comp. Center, Comp. Science, 27 (2008), 63 69 c 2008 NCC Publisher Two simple lattice models of the equilibrium shape and the surface morphology of supported 3D crystallites Michael P. Krasilnikov
More informationA Monte Carlo Implementation of the Ising Model in Python
A Monte Carlo Implementation of the Ising Model in Python Alexey Khorev alexey.s.khorev@gmail.com 2017.08.29 Contents 1 Theory 1 1.1 Introduction...................................... 1 1.2 Model.........................................
More informationHomework Hint. Last Time
Homework Hint Problem 3.3 Geometric series: ωs τ ħ e s= 0 =? a n ar = For 0< r < 1 n= 0 1 r ωs τ ħ e s= 0 1 = 1 e ħω τ Last Time Boltzmann factor Partition Function Heat Capacity The magic of the partition
More informationHamiltonian Monte Carlo for Scalable Deep Learning
Hamiltonian Monte Carlo for Scalable Deep Learning Isaac Robson Department of Statistics and Operations Research, University of North Carolina at Chapel Hill isrobson@email.unc.edu BIOS 740 May 4, 2018
More informationMVE165/MMG630, Applied Optimization Lecture 6 Integer linear programming: models and applications; complexity. Ann-Brith Strömberg
MVE165/MMG630, Integer linear programming: models and applications; complexity Ann-Brith Strömberg 2011 04 01 Modelling with integer variables (Ch. 13.1) Variables Linear programming (LP) uses continuous
More informationCSC 4510 Machine Learning
10: Gene(c Algorithms CSC 4510 Machine Learning Dr. Mary Angela Papalaskari Department of CompuBng Sciences Villanova University Course website: www.csc.villanova.edu/~map/4510/ Slides of this presenta(on
More information