Hill climbing: Simulated annealing and Tabu search
1 Hill climbing: Simulated annealing and Tabu search. Heuristic algorithms. Giovanni Righini, University of Milan, Department of Computer Science (Crema)
2 Hill climbing. Instead of repeating local search from scratch, it is possible to carry on the search after a local optimum has been reached: either by changing the neighborhood or the objective, or by accepting sub-optimal solutions and possibly worsening moves:

x := arg min_{x' ∈ N(x)} z(x')

The main problem with the latter alternative is looping, i.e. cyclically visiting the same solutions. The two main strategies to control this effect are: Simulated Annealing (SA), which uses randomness; Tabu Search (TS), which uses memory.
3 Annealing. The SA algorithm derives from the Metropolis algorithm (1953), which simulates a physical process: a metal is brought to a temperature close to the melting point, so that particles spread in a random and uniform way; then it is cooled very slowly, so that energy decreases but there is enough time to converge to thermal equilibrium. The aim of the process is to obtain a regular crystal lattice with no defects, corresponding to the ground state (the configuration of minimum energy): a material with useful physical properties.
4 Simulated Annealing. The correspondence with combinatorial optimization is the following: the particles correspond to variables (the spin of a particle corresponds to a binary domain); the states of the physical system correspond to solutions; the energy corresponds to the objective function; the ground state corresponds to the globally minimal solutions; the state transitions correspond to local search moves; the temperature corresponds to a parameter. This suggests using the Metropolis algorithm for optimization purposes. According to the laws of thermodynamics, at thermal equilibrium each state i has probability

π_T(i) = e^(−E_i / (kT)) / Σ_{j ∈ S} e^(−E_j / (kT))

where S is the set of states, T is the temperature and k is the Boltzmann constant. This distribution describes what happens at thermal equilibrium when the system is continuously subject to random transitions between states.
5 Metropolis algorithm. The Metropolis algorithm generates a random sequence of states: the current state i has energy E_i; the algorithm perturbs i, generating a state j with energy E_j; the transition from i to j occurs with probability

π_T(i, j) = 1                                        if E_j < E_i
π_T(i, j) = e^(−(E_j − E_i) / (kT)) = π_T(j) / π_T(i)  if E_j ≥ E_i

The Simulated Annealing algorithm simulates this.
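The acceptance rule above can be written as a small function. This is a minimal sketch; the function name and the default k = 1 (the usual convention in optimization, where the physical constant is irrelevant) are assumptions, not part of the slides.

```python
import math

def metropolis_accept_prob(e_i, e_j, t, k=1.0):
    """Probability of accepting the transition from state i (energy e_i)
    to state j (energy e_j) at temperature t."""
    if e_j < e_i:
        return 1.0                      # improving transitions are always accepted
    return math.exp(-(e_j - e_i) / (k * t))
```

Note that for a worsening transition the probability grows with the temperature, which is exactly the knob that the cooling schedule will later turn down.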
6 Simulated Annealing

Algorithm SimulatedAnnealing(I, x^(0), T)
  x := x^(0); x* := x^(0);
  While Stop() = false do
    x' := RandomExtract(N, x); { random uniform extraction }
    If z(x') < z(x) or U[0; 1] < e^(−(z(x') − z(x)) / T) then x := x';
    If z(x) < z(x*) then x* := x;
    T := Update(T);
  EndWhile;
  Return (x*, z(x*));

Remark: it is possible to perform worsening moves even when improving moves exist, because the neighborhood is not fully explored.
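The pseudocode can be sketched in Python as follows. This is an illustrative implementation under assumptions not fixed by the slides: Update(T) is taken to be geometric cooling T := αT, and the neighborhood is given as a function returning the list N(x).

```python
import math
import random

def simulated_annealing(z, neighbors, x0, t0, alpha=0.95, iters=2000, seed=0):
    """Simulated Annealing sketch: z is the objective to minimize,
    neighbors(x) returns the list N(x), t0 is the initial temperature."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(iters):
        xp = rng.choice(neighbors(x))        # random uniform extraction from N(x)
        if z(xp) < z(x) or rng.random() < math.exp(-(z(xp) - z(x)) / t):
            x = xp                           # accept (possibly worsening) move
        if z(x) < z(best):
            best = x                         # keep the incumbent x*
        t *= alpha                           # Update(T): geometric cooling
    return best, z(best)
```

As a usage example, minimizing z(x) = (x − 7)² over the integers with neighbors {x − 1, x + 1} converges to x = 7: improving moves are always accepted, and worsening moves become rare as T decreases.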
7 Acceptance criterion

π_T(x, x') = 1                           if z(x') < z(x)
π_T(x, x') = e^(−(z(x') − z(x)) / T)     if z(x') ≥ z(x)

The temperature parameter T calibrates the probability of accepting worsening moves: with T → ∞ they are frequently accepted: the search tends to diversify, as in a random walk; with T → 0 they are frequently rejected: the search tends to intensify, as in steepest descent. Note the analogy with ILS.
8 Convergence to the optimum. The probability that the current solution is x' is the sum, over all possible predecessor states x, of the probabilities of extracting move (x, x'), which is uniform, and accepting the move, which is

π_T(x, x') = 1                           if z(x') < z(x)
π_T(x, x') = e^(−(z(x') − z(x)) / T)     if z(x') ≥ z(x)

Hence, at each step the distribution only depends on the probability of the previous state: the random variable x forms a Markov chain. For each given value of T, the transition probabilities are constant over time: the Markov chain is homogeneous. If the search space is connected with respect to neighborhood N, the probability of reaching each state is strictly positive and the Markov chain is irreducible. Under these assumptions, the probability of the states tends to a stationary distribution, independent of the initial solution.
9 Convergence to the optimum. The stationary distribution is the one indicated by thermodynamics for the thermal equilibrium of physical systems, and it favors good solutions:

π_T(x) = e^(−z(x) / T) / Σ_{x' ∈ X} e^(−z(x') / T)   for each x ∈ X

where X is the feasible region. If T → 0, the distribution tends to the limit distribution

π(x) = lim_{T → 0} π_T(x) = 1 / |X*|   for x ∈ X*
π(x) = 0                                for x ∈ X \ X*

where X* is the set of global optima, which corresponds to guaranteed convergence to a global optimum (!)
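The concentration of the stationary distribution on the global optima as T → 0 can be seen numerically. A minimal sketch (the helper name and the toy objective values are illustrative, not from the slides):

```python
import math

def boltzmann(z_values, t):
    """Stationary (Boltzmann) distribution over states whose objective
    values are z_values, at temperature t."""
    weights = [math.exp(-z / t) for z in z_values]
    total = sum(weights)
    return [w / total for w in weights]

z = [0.0, 1.0, 2.0]        # state 0 is the unique global minimum
hot = boltzmann(z, 100.0)  # high T: nearly uniform distribution
cold = boltzmann(z, 0.01)  # low T: almost all mass on the minimum
```

At high temperature the three states are almost equally likely, while at low temperature virtually all probability mass sits on the minimum-cost state, matching the limit distribution above.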
10 Convergence to the optimum. However, the result holds only at equilibrium, and low values of T imply: high probability of visiting a global optimum; slow convergence to equilibrium (many moves are rejected). In finite time, using a lower value for T does not always improve the result. On the other hand, it is not necessary to visit global optima often: one visit is enough to discover the optimum. In practice, T is updated, decreasing it according to a cooling schedule. The initial value T^[0] is set: high enough to allow accepting many moves; low enough to allow rejecting the worst moves. After sampling the first neighborhood N(x^(0)), one usually fixes T^[0] so that a given fraction (e.g., 90%) of the moves in N(x^(0)) would be accepted.
11 Cooling schedule. In each outer iteration r = 0, ..., m a constant value T^[r] is used for l^[r] inner iterations; then T^[r] is updated according to an exponential function with 0 < α < 1:

T^[r] := α^r T^[0]

l^[r] is also updated, increasing with r (e.g. linearly), depending on the diameter of the search graph (and hence on the size of the instance). If T is variable, we have a non-homogeneous Markov chain, but if T decreases slowly enough, it still converges to the global optimum; the parameters depend on the instance (in particular, on z(x̄) − z(x*), where x̄ is the best local-but-not-global optimum).
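A schedule of this shape can be generated as follows. This is a sketch; the linear growth rate of l^[r] and the parameter names are illustrative choices, not prescribed by the slides.

```python
def cooling_schedule(t0, alpha, m, l0, growth=1):
    """Yield (temperature, inner_iterations) for outer rounds r = 0..m,
    with T[r] = alpha**r * T[0] and l[r] = l0 + growth*r (linear growth)."""
    for r in range(m + 1):
        yield (alpha ** r) * t0, l0 + growth * r

sched = list(cooling_schedule(t0=100.0, alpha=0.9, m=5, l0=10))
```

Each pair drives one outer iteration: run l^[r] inner moves at fixed temperature T^[r], then advance to the next pair.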
12 Computational efficiency and variants. Instead of computing probabilities through an exponential function at every step, it is convenient to pre-compute a table of the values of e^(−δ/T) for each possible δ = z(x') − z(x). In adaptive simulated annealing algorithms the parameter T depends on the results obtained: T is tuned so that a given fraction of N(x) is likely to be accepted; T is increased if the solution does not improve significantly and decreased otherwise.
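The pre-computed table is straightforward when the objective takes integer values, so the possible worsening deltas form a small finite set. A minimal sketch (the helper name and the sample deltas are assumptions):

```python
import math

def acceptance_table(deltas, t):
    """Pre-compute e^(-delta/T) for each possible worsening
    delta = z(x') - z(x), avoiding repeated exp() calls in the inner loop."""
    return {d: math.exp(-d / t) for d in deltas}

table = acceptance_table(range(1, 11), t=5.0)  # deltas 1..10 at T = 5
```

Inside the search loop, a worsening move with delta d is then accepted when `U[0;1] < table[d]`, a dictionary lookup instead of an exponential evaluation.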
13 Tabu Search (TS). Tabu Search (Glover, 1986) keeps the same selection criterion as the steepest descent algorithm:

x := arg min_{x' ∈ N(x)} z(x')

i.e. selecting the best solution in the neighborhood of the current one. If trivially implemented, this would cause loops in the search. The idea is to forbid already visited solutions, by imposing some tabu on the search:

x := arg min_{x' ∈ N(x) \ V} z(x')

where V is the set of tabu solutions. The principle is very simple, but the crucial issue is how to make it efficient.
14 Tabu search. An exchange heuristic based on the exhaustive exploration of the neighborhood, with a tabu on the already visited solutions, requires: 1. evaluating the feasibility of each subset produced by the exchanges (when it cannot be guaranteed a priori); 2. evaluating the cost of each feasible solution; 3. evaluating the tabu/non-tabu status of each promising feasible solution; 4. selecting the best feasible and non-tabu solution. An easy way to evaluate the tabu status is: to record the already visited solutions in a suitable data structure (called the tabu list); to check whether each explored solution belongs to the tabu list or not.
15 Making tabu search efficient. This is very inefficient: the check requires linear time in the size of the tabu list (it can be reduced with hash tables and search trees); the number of visited solutions increases with time; the memory occupation increases with time. The Cancellation Sequence Method and the Reverse Elimination Method tackle these problems, exploiting the observation that, in general: visited solutions form a chain of small variations; few visited solutions belong to the neighborhood of the current one. The idea is to concentrate on the variations, not on the solutions: to keep a list of moves, instead of a list of solutions; to evaluate the overall variations done; to find solutions that have been subject to few/small changes (recent solutions, or solutions subject to changes that have been undone later).
16 More reasons for not using tabu solutions. There are other phenomena that affect the effectiveness of tabu search. Forbidding already visited solutions may have two different negative effects: it can disconnect the search graph (hence it would be better to avoid absolute prohibitions); it may hinder exiting from attraction basins (hence it would be better to apply the tabu status to many other solutions in the same basin). The two observations suggest opposite remedies.
17 Example. A tricky example is the following: the ground set E contains L elements; all subsets are feasible: X = 2^E; the objective combines an additive term which is almost uniform (ε ≪ 1) with a large negative term for x = E, zero otherwise:

z(x) = Σ_{i ∈ x} (1 + εi)            for x ≠ E
z(x) = Σ_{i ∈ x} (1 + εi) − L − 1    for x = E

If we consider the neighborhood made of the solutions at Hamming distance 1

N_H1(x) = {x' ∈ 2^E : d_H(x, x') ≤ 1}

the problem has: a local optimum x̄ = ∅ with z(x̄) = 0, whose attraction basin contains all solutions with |x| < L − 1; a global optimum x* = E with z(x*) = L(L − 1)ε/2 − 1 < 0, whose attraction basin contains all solutions with |x| > L − 2.
18 Example. Starting from x^(0) = x̄ = ∅ and running Tabu Search forbidding the already visited solutions: the trajectory of the search scans a large part of 2^E, going farther from x̄ and then closer again, with z oscillating up and down; for values of L ≥ 4 it gets stuck in a solution whose neighborhood has been completely explored, although other solutions have not been visited yet; for large values of L (e.g., L = 16), it cannot reach the global optimum.
19 Example. The oscillations of the objective function show the drawbacks of the method. The solution x repeatedly goes farther from x^(0) = x̄ and then closer to it: it visits almost the entire attraction basin of x̄; eventually it does not leave the basin, but remains in a solution whose neighborhood is completely tabu.
20 Tabu attributes. To overcome these difficulties some simple techniques are used: 1. instead of forbidding visited solutions, solutions are tabu when they possess some attributes in common with the visited solutions: a set A of relevant attributes is defined; a subset Ā of attributes (initially empty) is declared tabu; all solutions with tabu attributes are tabu:

A(y) ∩ Ā ≠ ∅  ⇒  y is tabu

If a move transforms the current solution x into x', the attributes that x had and x' does not have are inserted into Ā (in this way x becomes tabu). This means that solutions similar to those already visited are tabu; the search is faster in leaving the attraction basins of the already visited local optima.
21 Temporary tabu and aspiration criteria. Since the tabu list generates regions that are difficult or impossible to reach, 2. the tabu status has a limited duration, defined by a number of iterations L (the tabu tenure): tabu solutions become accessible again; it is possible to re-visit the same solutions (however, if Ā is different, the next iterations will be different). The tabu tenure L is a critical parameter of TS. Since the tabu list could forbid global optima just because they are similar to visited solutions, an aspiration criterion is used: a tabu solution is accepted when it is better than the best incumbent solution. When all solutions in the neighborhood of the current solution are tabu, the algorithm accepts the one with the most ancient tabu status.
22 Tabu search

Algorithm TabuSearch(I, x^(0), L)
  x := x^(0); x* := x^(0); Ā := ∅;
  While Stop() = false do
    z' := +∞;
    For each y ∈ N(x) do
      If z(y) < z' then
        If Tabu(y, Ā) = false or z(y) < z(x*) then
          x := y; z' := z(y);
        EndIf
    EndFor
    Ā := Update(Ā, x, L);
    If z(x) < z(x*) then x* := x;
  EndWhile
  Return (x*, z(x*));
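The template can be sketched in Python on a toy problem. This is an illustrative simplification, not the slides' attribute-based scheme: here the tabu "attribute" is the visited solution itself, kept tabu for `tenure` iterations, with the aspiration criterion admitting any tabu solution that beats the incumbent.

```python
def tabu_search(z, neighbors, x0, tenure, iters=50):
    """Tabu Search sketch: move to the best non-tabu neighbor each
    iteration; aspiration overrides the tabu status for new incumbents."""
    x, best = x0, x0
    tabu = {}                                    # solution -> iteration it became tabu
    for t in range(iters):
        candidates = []
        for y in neighbors(x):
            is_tabu = y in tabu and t < tabu[y] + tenure
            if not is_tabu or z(y) < z(best):    # aspiration criterion
                candidates.append(y)
        if not candidates:
            break                                # the whole neighborhood is tabu
        x = min(candidates, key=z)               # best admissible (possibly worsening) move
        tabu[x] = t
        if z(x) < z(best):
            best = x
    return best, z(best)

# Toy landscape: a local optimum at x = 2 (z = 3) hides the global one at x = 6 (z = 1).
vals = [5, 4, 3, 4, 5, 2, 1, 2, 3, 4, 5]
best, val = tabu_search(vals.__getitem__,
                        lambda x: [y for y in (x - 1, x + 1) if 0 <= y < len(vals)],
                        x0=0, tenure=3)
```

Starting from x = 0, plain steepest descent would stop at the local optimum x = 2; the tabu list forces worsening moves that climb out of its basin and reach the global optimum at x = 6.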
23 Tabu attributes. Some possible definitions of attribute: an element belongs to the solution (A(x) = x): when the move from x to x' deletes an element i from the solution, the tabu status forbids the reinsertion of i in the next L iterations; every solution with element i becomes tabu; an element does not belong to the solution (A(x) = E \ x): when the move from x to x' inserts an element i in the solution, the tabu status forbids the deletion of i in the next L iterations; every solution without element i becomes tabu. It is common to use several attributes together, each one with its own tabu tenure and tabu list (e.g., after replacing i with j, it is forbidden to delete j for L_in iterations and to reinsert i for L_out iterations, with L_in ≠ L_out).
24 Tabu attributes. Other examples of attributes: the value of the objective function; the value of an auxiliary function (e.g., the distance from the best incumbent solution). Complex attributes can be obtained by combining simple ones: if a move from x to x' replaces element i with element j, we can forbid the replacement of j with i, while still allowing deleting j only or inserting i only.
25 Efficient evaluation of the tabu status. Even when it is based on attributes, the evaluation of the tabu status of a solution must be efficient: scanning the whole solution is not acceptable. Attributes are associated with moves, not with solutions. The evaluation can be done in constant time by recording, in a data structure, the iteration in which the tabu status begins for each attribute. When insertions are tabu (the attribute is the presence of an element): at iteration t, it is tabu to insert any i ∈ E \ x such that t ≤ T_i^in + L_in; at iteration t, we set T_i^in := t for each i just deleted from x. When deletions are tabu (the attribute is the absence of an element): at iteration t, it is tabu to delete any i ∈ x such that t ≤ T_i^out + L_out; at iteration t, we set T_i^out := t for each i just inserted into x. If both are used, one vector is enough, since either i ∈ x or i ∈ E \ x. For more complex attributes, matrices or other data structures are needed.
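The constant-time check amounts to a timestamp array per attribute. A minimal sketch using a single shared timestamp vector, as suggested above (the class and method names are illustrative):

```python
class TabuAttributes:
    """Constant-time tabu checks via per-element timestamps: T[i] is the
    last iteration at which element i changed status (inserted or deleted)."""
    def __init__(self, n, l_in, l_out):
        self.T = [float('-inf')] * n      # -inf: never touched, so never tabu
        self.l_in, self.l_out = l_in, l_out

    def record(self, i, t):
        self.T[i] = t                     # i was just inserted into / deleted from x

    def insert_is_tabu(self, i, t):       # for i currently outside the solution
        return t <= self.T[i] + self.l_in

    def delete_is_tabu(self, i, t):       # for i currently inside the solution
        return t <= self.T[i] + self.l_out
```

Both checks are O(1) lookups, so the cost of the tabu test does not grow with the number of visited solutions, in contrast with the explicit tabu list of visited solutions.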
26 Example: the TSP. We consider the neighborhood N_R2 generated by the 2-opt exchanges, and we use both the presence and the absence of edges as attributes. Initially T_ij := −∞ for every edge (i, j) ∈ A. At each iteration t, the algorithm scans the n(n−1)/2 pairs of edges that can be deleted and the corresponding pairs of edges that would replace them. The move (i, j), which replaces (s_i, s_{i+1}) and (s_j, s_{j+1}) with (s_i, s_j) and (s_{i+1}, s_{j+1}), is tabu at iteration t if one of the following conditions holds:

1. t ≤ T_{s_i, s_{i+1}} + L_out
2. t ≤ T_{s_j, s_{j+1}} + L_out
3. t ≤ T_{s_i, s_j} + L_in
4. t ≤ T_{s_{j+1}, s_{i+1}} + L_in

Once the move (i*, j*) has been chosen, the data structures are updated:

1. T_{s_{i*}, s_{i*+1}} := t
2. T_{s_{j*}, s_{j*+1}} := t
3. T_{s_{i*}, s_{j*}} := t
4. T_{s_{j*+1}, s_{i*+1}} := t

Since n edges belong to the solution and n(n−2) do not, it is convenient to set L_out < L_in.
27 Example: the KP. The neighborhood N_H1 contains the solutions at Hamming distance 1. For simplicity we use the attribute "flipped variable": a vector T records when each variable i ∈ E has been flipped the last time. Let L = 3. (The slide shows a worked trace of the vector T over iterations t = 1, 2, 3, recording in position i the iteration at which variable i was last flipped.)
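The flipped-variable bookkeeping can be reproduced in a few lines. The flip sequence below is a hypothetical example (the slide's exact trace is not recoverable from the transcription); only the mechanism, one last-flip timestamp per variable, is taken from the text.

```python
def kp_tabu_trace(flips, n):
    """Return the vector T: T[i] is the iteration at which variable i was
    last flipped, or None if it was never flipped."""
    T = [None] * n
    for t, i in enumerate(flips, start=1):
        T[i] = t
    return T

# Hypothetical trace: flip variable 1 at t=1, variable 3 at t=2, variable 0 at t=3.
T = kp_tabu_trace([1, 3, 0], n=4)
L = 3  # tabu tenure: re-flipping variable i at iteration t is tabu while t <= T[i] + L
```

With this vector, the tabu test for flipping variable i again at iteration t is the single comparison t ≤ T[i] + L, exactly the constant-time scheme of the previous slide.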
28 Tuning the tabu tenure. The value L of the tabu tenure is of paramount importance: too large values may hide the global optimum and, in the worst case, block the search; too small values may leave the search in useless regions and, in the worst case, allow looping. The best value for L in general depends on the size of the instance: it often increases slowly with it (a recipe is L ∈ O(√n)); almost constant values work fine also for different sizes. Extracting L at random from a range [L_min, L_max] breaks loops. Adaptive tabu tenures react to the results of the search, updating L within a given range [L_min, L_max]: L decreases when the current solution x improves: the search is likely approaching a local optimum and one wants to intensify the search; L increases when the current solution x worsens: the search is likely escaping from a visited attraction basin and one wants to diversify the search.
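Both tenure policies fit in a few lines. This is a sketch; the unit step size of the adaptive update and the function names are assumptions, not prescribed by the slides.

```python
import random

def random_tenure(l_min, l_max, rng=None):
    """Randomized tenure: extract L uniformly from [l_min, l_max] to break loops."""
    rng = rng or random.Random()
    return rng.randint(l_min, l_max)

def adapt_tenure(l, improved, l_min, l_max):
    """Adaptive tenure: shrink L when the current solution improves
    (intensify), grow it when it worsens (diversify), within [l_min, l_max]."""
    return max(l_min, l - 1) if improved else min(l_max, l + 1)
```

In a tabu search loop, `adapt_tenure` would be called once per iteration with `improved = z(x_new) < z(x_old)`, and the resulting L used as the tenure for the attributes recorded at that iteration.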
29 Variations. In the long run, adaptive techniques tend to lose their effectiveness, and long-term strategies are employed. Reactive Tabu Search: uses efficient data structures to record visited solutions and detect loops; if solutions repeat too often, it shifts the range [L_min, L_max] to larger values. Frequency-based Tabu Search: records the frequency of each attribute in the solution, in data structures similar to the tabu list; if an attribute occurs very often, it either favors the moves that insert it, by a modification of z (as in DLS), or it forbids the moves that insert it, or penalizes them by a modification of z. Exploring Tabu Search: re-initializes the search from good solutions already found but never used as current solutions (they are the second-best solutions in some neighborhood). Granular Tabu Search: modifies the neighborhood by progressively enlarging it.
Hopfield Networks and Boltzmann Machines Christian Borgelt Artificial Neural Networks and Deep Learning 296 Hopfield Networks A Hopfield network is a neural network with a graph G = (U,C) that satisfies
More informationLecture 2: The Simplex method
Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.
More informationNatural Computing. Lecture 11. Michael Herrmann phone: Informatics Forum /10/2011 ACO II
Natural Computing Lecture 11 Michael Herrmann mherrman@inf.ed.ac.uk phone: 0131 6 517177 Informatics Forum 1.42 25/10/2011 ACO II ACO (in brief) ACO Represent solution space Set parameters, initialize
More informationComputational statistics
Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated
More informationMarkov Processes. Stochastic process. Markov process
Markov Processes Stochastic process movement through a series of well-defined states in a way that involves some element of randomness for our purposes, states are microstates in the governing ensemble
More informationThe particle swarm optimization algorithm: convergence analysis and parameter selection
Information Processing Letters 85 (2003) 317 325 www.elsevier.com/locate/ipl The particle swarm optimization algorithm: convergence analysis and parameter selection Ioan Cristian Trelea INA P-G, UMR Génie
More informationIntroduction to integer programming III:
Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems Martin Branda Charles University in Prague Faculty of Mathematics and Physics Department of Probability
More information( ) ( ) ( ) ( ) Simulated Annealing. Introduction. Pseudotemperature, Free Energy and Entropy. A Short Detour into Statistical Mechanics.
Aims Reference Keywords Plan Simulated Annealing to obtain a mathematical framework for stochastic machines to study simulated annealing Parts of chapter of Haykin, S., Neural Networks: A Comprehensive
More informationMAX-2-SAT: How Good is Tabu Search in the Worst-Case?
MAX-2-SAT: How Good is Tabu Search in the Worst-Case? Monaldo Mastrolilli IDSIA Galleria 2, 6928 Manno, Switzerland monaldo@idsia.ch Luca Maria Gambardella IDSIA Galleria 2, 6928 Manno, Switzerland luca@idsia.ch
More informationTabu Search. Biological inspiration is memory the ability to use past experiences to improve current decision making.
Tabu Search Developed by Fred Glover in the 1970 s. Dr Glover is a business professor at University of Colorado at Boulder. Developed specifically as a combinatorial optimization tool. Biological inspiration
More informationEfficient Cryptanalysis of Homophonic Substitution Ciphers
Efficient Cryptanalysis of Homophonic Substitution Ciphers Amrapali Dhavare Richard M. Low Mark Stamp Abstract Substitution ciphers are among the earliest methods of encryption. Examples of classic substitution
More information27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling
10-708: Probabilistic Graphical Models 10-708, Spring 2014 27 : Distributed Monte Carlo Markov Chain Lecturer: Eric P. Xing Scribes: Pengtao Xie, Khoa Luu In this scribe, we are going to review the Parallel
More informationTotally unimodular matrices. Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems
Totally unimodular matrices Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems Martin Branda Charles University in Prague Faculty of Mathematics and
More informationInteger Linear Programming
Integer Linear Programming Solution : cutting planes and Branch and Bound Hugues Talbot Laboratoire CVN April 13, 2018 IP Resolution Gomory s cutting planes Solution branch-and-bound General method Resolution
More informationModule 1: Analyzing the Efficiency of Algorithms
Module 1: Analyzing the Efficiency of Algorithms Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu What is an Algorithm?
More informationInteger Programming ISE 418. Lecture 8. Dr. Ted Ralphs
Integer Programming ISE 418 Lecture 8 Dr. Ted Ralphs ISE 418 Lecture 8 1 Reading for This Lecture Wolsey Chapter 2 Nemhauser and Wolsey Sections II.3.1, II.3.6, II.4.1, II.4.2, II.5.4 Duality for Mixed-Integer
More informationPowerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature.
Markov Chains Markov chains: 2SAT: Powerful tool for sampling from complicated distributions rely only on local moves to explore state space. Many use Markov chains to model events that arise in nature.
More informationImproving the Asymptotic Performance of Markov Chain Monte-Carlo by Inserting Vortices
Improving the Asymptotic Performance of Markov Chain Monte-Carlo by Inserting Vortices Yi Sun IDSIA Galleria, Manno CH-98, Switzerland yi@idsia.ch Faustino Gomez IDSIA Galleria, Manno CH-98, Switzerland
More informationTravelling Salesman Problem
Travelling Salesman Problem Fabio Furini November 10th, 2014 Travelling Salesman Problem 1 Outline 1 Traveling Salesman Problem Separation Travelling Salesman Problem 2 (Asymmetric) Traveling Salesman
More informationNumerical methods part 2
Numerical methods part 2 Alain Hébert alain.hebert@polymtl.ca Institut de génie nucléaire École Polytechnique de Montréal ENE6103: Week 6 Numerical methods part 2 1/33 Content (week 6) 1 Solution of an
More informationAlgorithms and Complexity theory
Algorithms and Complexity theory Thibaut Barthelemy Some slides kindly provided by Fabien Tricoire University of Vienna WS 2014 Outline 1 Algorithms Overview How to write an algorithm 2 Complexity theory
More informationGradient Descent. Sargur Srihari
Gradient Descent Sargur srihari@cedar.buffalo.edu 1 Topics Simple Gradient Descent/Ascent Difficulties with Simple Gradient Descent Line Search Brent s Method Conjugate Gradient Descent Weight vectors
More informationUnconstrained optimization
Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout
More informationRandom Walks A&T and F&S 3.1.2
Random Walks A&T 110-123 and F&S 3.1.2 As we explained last time, it is very difficult to sample directly a general probability distribution. - If we sample from another distribution, the overlap will
More informationLecture 35 Minimization and maximization of functions. Powell s method in multidimensions Conjugate gradient method. Annealing methods.
Lecture 35 Minimization and maximization of functions Powell s method in multidimensions Conjugate gradient method. Annealing methods. We know how to minimize functions in one dimension. If we start at
More informationLecture # 20 The Preconditioned Conjugate Gradient Method
Lecture # 20 The Preconditioned Conjugate Gradient Method We wish to solve Ax = b (1) A R n n is symmetric and positive definite (SPD). We then of n are being VERY LARGE, say, n = 10 6 or n = 10 7. Usually,
More information3 The Simplex Method. 3.1 Basic Solutions
3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,
More information6 Markov Chain Monte Carlo (MCMC)
6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods, with dependent samples from an ergodic Markov chain, whose limiting (stationary) distribution
More informationROOT FINDING REVIEW MICHELLE FENG
ROOT FINDING REVIEW MICHELLE FENG 1.1. Bisection Method. 1. Root Finding Methods (1) Very naive approach based on the Intermediate Value Theorem (2) You need to be looking in an interval with only one
More informationUnit 1A: Computational Complexity
Unit 1A: Computational Complexity Course contents: Computational complexity NP-completeness Algorithmic Paradigms Readings Chapters 3, 4, and 5 Unit 1A 1 O: Upper Bounding Function Def: f(n)= O(g(n)) if
More informationGradient Descent. Dr. Xiaowei Huang
Gradient Descent Dr. Xiaowei Huang https://cgi.csc.liv.ac.uk/~xiaowei/ Up to now, Three machine learning algorithms: decision tree learning k-nn linear regression only optimization objectives are discussed,
More informationComputational Intelligence in Product-line Optimization
Computational Intelligence in Product-line Optimization Simulations and Applications Peter Kurz peter.kurz@tns-global.com June 2017 Restricted use Restricted use Computational Intelligence in Product-line
More informationMachine Learning CS 4900/5900. Lecture 03. Razvan C. Bunescu School of Electrical Engineering and Computer Science
Machine Learning CS 4900/5900 Razvan C. Bunescu School of Electrical Engineering and Computer Science bunescu@ohio.edu Machine Learning is Optimization Parametric ML involves minimizing an objective function
More informationDefinition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states.
Chapter 8 Finite Markov Chains A discrete system is characterized by a set V of states and transitions between the states. V is referred to as the state space. We think of the transitions as occurring
More informationCS 331: Artificial Intelligence Local Search 1. Tough real-world problems
S 331: rtificial Intelligence Local Search 1 1 Tough real-world problems Suppose you had to solve VLSI layout problems (minimize distance between components, unused space, etc.) Or schedule airlines Or
More informationLOCAL SEARCH. Today. Reading AIMA Chapter , Goals Local search algorithms. Introduce adversarial search 1/31/14
LOCAL SEARCH Today Reading AIMA Chapter 4.1-4.2, 5.1-5.2 Goals Local search algorithms n hill-climbing search n simulated annealing n local beam search n genetic algorithms n gradient descent and Newton-Rhapson
More information5.3 METABOLIC NETWORKS 193. P (x i P a (x i )) (5.30) i=1
5.3 METABOLIC NETWORKS 193 5.3 Metabolic Networks 5.4 Bayesian Networks Let G = (V, E) be a directed acyclic graph. We assume that the vertices i V (1 i n) represent for example genes and correspond to
More informationSimulated Annealing. Chapter Background Survey. Alexander G. Nikolaev and Sheldon H. Jacobson
Chapter 1 Simulated Annealing Alexander G. Nikolaev and Sheldon H. Jacobson Abstract Simulated annealing is a well-studied local search metaheuristic used to address discrete and, to a lesser extent, continuous
More informationBayesian Networks: Construction, Inference, Learning and Causal Interpretation. Volker Tresp Summer 2016
Bayesian Networks: Construction, Inference, Learning and Causal Interpretation Volker Tresp Summer 2016 1 Introduction So far we were mostly concerned with supervised learning: we predicted one or several
More information