Hill climbing: Simulated annealing and Tabu search Heuristic algorithms Giovanni Righini University of Milan Department of Computer Science (Crema)
Hill climbing

Instead of restarting local search, it is possible to carry on the search after a local optimum has been reached, either by changing the neighborhood or the objective, or by accepting sub-optimal solutions and possibly worsening moves:

x := arg min_{x' ∈ N(x)} z(x')

The main problem with the latter alternative is looping, i.e. cyclically visiting the same solutions. The two main strategies allowing to control this effect are:
- Simulated Annealing (SA), which uses randomness;
- Tabu Search (TS), which uses memory.
Annealing

The SA algorithm derives from the Metropolis algorithm (1953), which simulates a physical process: a metal is brought to a temperature close to the melting point, so that particles spread in a random and uniform way; then it is cooled very slowly, so that the energy decreases but there is enough time to converge to thermal equilibrium. The aim of the process is to obtain a regular crystal lattice with no defects, corresponding to the ground state (the configuration of minimum energy): a material with useful physical properties.
Simulated Annealing

The correspondence with combinatorial optimization is the following:
- the particles correspond to variables (the spin of a particle corresponds to a binary domain);
- the states of the physical system correspond to solutions;
- the energy corresponds to the objective function;
- the ground state corresponds to the globally optimal solutions;
- the state transitions correspond to local search moves;
- the temperature corresponds to a parameter.

This suggests using the Metropolis algorithm for optimization purposes. According to the laws of thermodynamics, at thermal equilibrium each state i has probability

π_T(i) = e^{-E_i/(kT)} / Σ_{j ∈ S} e^{-E_j/(kT)}

where S is the set of states, T is the temperature and k is the Boltzmann constant. This distribution describes what happens at thermal equilibrium when the system is continuously subject to random transitions between states.
Metropolis algorithm

The Metropolis algorithm generates a random sequence of states:
- the current state i has energy E_i;
- the algorithm perturbs i, generating a state j with energy E_j;
- the transition from i to j occurs with probability

π_T(i, j) = { 1                                          if E_j < E_i
            { e^{-(E_j - E_i)/(kT)} = π_T(j)/π_T(i)      if E_j ≥ E_i

The Simulated Annealing algorithm simulates this process.
Simulated Annealing

Algorithm SimulatedAnnealing(I, x^(0), T)
  x := x^(0); x* := x^(0);
  While Stop() = false do
    x' := RandomExtract(N, x);  { random uniform extraction }
    If z(x') < z(x) or U[0, 1] < e^{(z(x) - z(x'))/T} then x := x';
    If z(x') < z(x*) then x* := x';
    T := Update(T);
  EndWhile;
  Return (x*, z(x*));

Remark: it is possible to make worsening moves even when improving moves exist, because the neighborhood is not fully explored.
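As a concrete illustration of the pseudocode above, here is a minimal Python sketch; the function names (`neighbor`, `simulated_annealing`) and the geometric rule standing in for Update(T) are illustrative choices, not part of the slides.

```python
import math
import random

def simulated_annealing(x0, neighbor, z, T0, alpha=0.95, iters=1000, seed=0):
    """Minimal SA sketch: neighbor(x, rng) must return one uniformly
    sampled neighbor of x; cooling is a simple geometric Update rule."""
    rng = random.Random(seed)
    x, best = x0, x0
    T = T0
    for _ in range(iters):
        y = neighbor(x, rng)              # random uniform extraction from N(x)
        # accept improving moves always, worsening ones with prob. e^{(z(x)-z(y))/T}
        if z(y) < z(x) or rng.random() < math.exp((z(x) - z(y)) / T):
            x = y
        if z(x) < z(best):                # keep the best solution found so far
            best = x
        T = alpha * T                     # cooling step (Update(T))
    return best
```

For instance, minimizing z(x) = x^2 over the integers with neighbors x ± 1 quickly drives the current solution toward 0 once T becomes small.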
Acceptance criterion

π_T(x, x') = { 1                          if z(x') < z(x)
             { e^{-(z(x') - z(x))/T}      if z(x') ≥ z(x)

The temperature parameter T calibrates the probability of accepting worsening moves:
- when T ≫ 0, they are frequently accepted: the search tends to diversify, as in a random walk;
- when T ≈ 0, they are frequently rejected: the search tends to intensify, as in steepest descent.

Note the analogy with ILS.
Convergence to the optimum

The probability that the current solution is x' is the sum, over all possible predecessor states x, of the probability of extracting the move (x, x'), which is uniform, times the probability of accepting it, which is

π_T(x, x') = { 1                          if z(x') < z(x)
             { e^{-(z(x') - z(x))/T}      if z(x') ≥ z(x)

Hence, at each step the current state only depends on the previous state: the random variable x forms a Markov chain. For each given value of T, the transition probabilities are constant over time: the Markov chain is homogeneous. If the search space is connected with respect to neighborhood N, the probability of reaching each state is strictly positive and the Markov chain is irreducible. Under these assumptions, the probability distribution of the states tends to a stationary distribution, independent of the initial solution.
Convergence to the optimum

The stationary distribution is the one indicated by thermodynamics for the thermal equilibrium of physical systems, and it favors good solutions:

π_T(x) = e^{-z(x)/T} / Σ_{x' ∈ X} e^{-z(x')/T}   for each x ∈ X

where X is the feasible region. If T → 0, the distribution tends to a limit distribution

π(x) = lim_{T→0} π_T(x) = { 1/|X*|   for x ∈ X*
                          { 0        for x ∈ X \ X*

where X* is the set of global optima, which corresponds to guaranteed convergence to a global optimum (!)
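The stationary distribution can be checked numerically on a toy state space. The sketch below (objective values are illustrative) shows that it is near-uniform for large T and concentrates on the global optimum as T → 0:

```python
import math

def boltzmann(z_values, T):
    """Stationary (Boltzmann) distribution over a finite set of solutions:
    pi_T(x) proportional to e^{-z(x)/T}."""
    w = [math.exp(-z / T) for z in z_values]
    s = sum(w)
    return [wi / s for wi in w]

z_values = [0.0, 1.0, 1.0, 3.0]      # toy objective values; 0.0 is the unique optimum
hot = boltzmann(z_values, T=100.0)   # almost uniform: the search diversifies
cold = boltzmann(z_values, T=0.01)   # almost all mass on the optimum
```

At T = 100 the four probabilities differ by less than 1%, while at T = 0.01 the optimum carries essentially all the probability mass.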
Convergence to the optimum

However, the result holds at equilibrium, and low values of T imply
- high probability of visiting a global optimum;
- slow convergence to the optimum (many moves are rejected).

In finite time, using a lower value for T does not always improve the result. On the other hand, it is not necessary to visit global optima often: one visit is enough to discover the optimum. In practice, T is updated, decreasing it according to a cooling schedule. The initial value T^[0] is set
- high enough to allow accepting many moves;
- low enough to allow rejecting the worst moves.

After sampling the first neighborhood N(x^(0)), usually one fixes T^[0] so that a given fraction (e.g., 90%) of the moves in N(x^(0)) is accepted.
Cooling schedule

In each outer iteration r = 0, ..., m:
- a constant value T^[r] is used for l^[r] inner iterations;
- T^[r] is updated according to an exponential function

  T^[r] := α^r T^[0]   with 0 < α < 1;

- l^[r] is also updated, increasing with r (e.g., linearly), depending on the diameter of the search graph (and hence on the size of the instance).

If T is variable, we have a non-homogeneous Markov chain, but if T decreases slowly enough, it converges to the global optimum; the parameters depend on the instance (in particular, on z(x̄) − z(x*), where x̄ is the best local-but-not-global optimum).
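The schedule above can be sketched in a few lines; the specific parameter values and the linear growth rule for l^[r] are illustrative choices.

```python
def cooling_schedule(T0, alpha, m, l0, growth=1):
    """Geometric cooling T[r] = alpha^r * T0 with a linearly growing
    number of inner iterations l[r] = l0 + growth * r."""
    return [(T0 * alpha ** r, l0 + growth * r) for r in range(m + 1)]

sched = cooling_schedule(100.0, 0.9, m=2, l0=10)   # [(100.0, 10), (90.0, 11), (81.0, 12)]
```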
Computational efficiency and variants

Instead of computing probabilities through an exponential function, it is convenient to pre-compute a table of the values of e^{-δ/T} for each possible δ = z(x') − z(x). In adaptive simulated annealing algorithms the parameter T depends on the results obtained:
- T is tuned so that a given fraction of the moves in N(x) is likely to be accepted;
- T is increased if the solution does not improve significantly, and decreased otherwise.
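A minimal sketch of the pre-computed table, assuming integer objective values so that the worsening amounts δ come from a small finite set:

```python
import math

def acceptance_table(max_delta, T):
    """Pre-compute e^{-delta/T} for integer worsening amounts
    delta = z(x') - z(x) in 1..max_delta, so the inner loop
    does a dictionary lookup instead of calling exp."""
    return {d: math.exp(-d / T) for d in range(1, max_delta + 1)}

table = acceptance_table(10, T=5.0)   # table[delta] replaces math.exp(-delta / T)
```

The table must be rebuilt (or one table kept per temperature level) whenever T changes.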
Tabu Search (TS)

Tabu Search (Glover, 1986) keeps the same selection criterion as the steepest descent algorithm,

x := arg min_{x' ∈ N(x)} z(x')

i.e. selecting the best solution in the neighborhood of the current one. If trivially implemented, this would cause loops in the search. The idea is to forbid already visited solutions, by imposing some tabu on the search:

x := arg min_{x' ∈ N(x) \ V} z(x')

where V is the set of tabu solutions. The principle is very simple, but the crucial issue is how to make it efficient.
Tabu search

An exchange heuristic based on the exhaustive exploration of the neighborhood, with a tabu on the already visited solutions, requires:
1. evaluating the feasibility of each subset produced by the exchanges (when it cannot be guaranteed a priori);
2. evaluating the cost of each feasible solution;
3. evaluating the tabu/non-tabu status of each promising feasible solution;
4. selecting the best feasible and non-tabu solution.

An easy way to evaluate the tabu status is
- to record the already visited solutions in a suitable data structure (called tabu list);
- to check whether each explored solution belongs to the tabu list or not.
Making tabu search efficient

This is very inefficient:
- the check requires time linear in the size of the tabu list (it can be reduced with hash tables and search trees);
- the number of visited solutions increases with time;
- the memory occupation increases with time.

The Cancellation Sequence Method and the Reverse Elimination Method tackle these problems, exploiting the observation that, in general,
- the visited solutions form a chain of small variations;
- few visited solutions belong to the neighborhood of the current one.

The idea is to concentrate on the variations, not on the solutions:
- to keep a list of moves, instead of a list of solutions;
- to evaluate the overall variations done;
- to find solutions that have been subject to few/small changes (recent solutions, or solutions subject to changes that have been undone later).
More reasons for not using tabu solutions

There are other phenomena that affect the effectiveness of tabu search. Forbidding already visited solutions may have two different negative effects:
- it can disconnect the search graph (hence it would be better to avoid absolute prohibitions);
- it may hinder exiting from attraction basins (hence it would be better to apply the tabu status also to many other solutions in the same basin).

The two observations suggest opposite remedies.
Example

A tricky example is the following:
- the ground set E contains L elements, indexed i = 0, ..., L−1;
- all subsets are feasible: X = 2^E;
- the objective combines an additive term which is almost uniform (ǫ ≪ 1) and a large negative term for x = E (and zero otherwise):

z(x) = { Σ_{i ∈ x} (1 + ǫ i)              for x ≠ E
       { Σ_{i ∈ x} (1 + ǫ i) − L − 1      for x = E

If we consider the neighborhood made by the solutions at Hamming distance at most 1,

N_{H1}(x) = { x' ∈ 2^E : d_H(x, x') ≤ 1 }

the problem has
- a local optimum x̄ = ∅ with z(x̄) = 0, whose attraction basin contains all solutions with |x| < L − 1;
- a global optimum x* = E with z(x*) = ǫ L(L−1)/2 − 1 < 0, whose attraction basin contains all solutions with |x| > L − 2.
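The objective above can be coded directly; this is a sketch assuming the reconstruction given here, with elements indexed 0, ..., L−1 and ǫ = 10^{-3} as an illustrative value.

```python
def z(x, L, eps=1e-3):
    """Toy objective: almost-uniform additive term, plus the large
    negative term -(L + 1) only when x is the full ground set E."""
    val = sum(1 + eps * i for i in x)
    if len(x) == L:
        val -= L + 1
    return val
```

For L = 5 this gives z(∅) = 0 and z(E) = ǫL(L−1)/2 − 1 < 0, matching the two optima stated above.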
Example

Starting from x^(0) = x̄ = ∅ and running Tabu Search forbidding the already visited solutions, the trajectory of the search
- scans a large part of 2^E, going farther from x̄ and then closer again, with z oscillating up and down;
- for values of L ≥ 4, it gets stuck in a solution whose neighborhood has been completely explored, although other solutions have not been visited yet;
- for large values of L (e.g., L = 16), it cannot reach the global optimum.
Example

The oscillations of the objective function show the drawbacks of the method. The solution x repeatedly goes farther from x^(0) = x̄ and then closer to it:
- it visits almost the entire attraction basin of x̄;
- eventually it does not leave the basin, but remains stuck in a solution whose neighborhood is completely tabu.
Tabu attributes

To overcome these difficulties some simple techniques are used:
1. instead of forbidding visited solutions, solutions are tabu when they possess some attributes in common with the visited solutions:
- a set A of relevant attributes is defined;
- a subset Ā of attributes (initially empty) is declared tabu;
- all solutions with tabu attributes are tabu:

  A(y) ∩ Ā ≠ ∅  ⇒  y is tabu

- if a move transforms the current solution x into x', the attributes that x had and x' does not have are inserted into Ā (in this way x becomes tabu).

This means that solutions similar to those already visited are tabu; the search is faster in leaving the attraction basins of the already visited local optima.
Temporary tabu and aspiration criteria

Since the tabu list generates regions that are difficult or impossible to reach,
2. the tabu status has a limited duration, defined by a number of iterations L (the tabu tenure):
- tabu solutions become accessible again;
- it is possible to re-visit the same solutions (however, if Ā is different, the next iterations will be different).

The tabu tenure L is a critical parameter of TS. Since the tabu list could forbid global optima just because they are similar to visited solutions, an aspiration criterion is used: a tabu solution is accepted when it is better than the best incumbent solution. When all solutions in the neighborhood of the current solution are tabu, the algorithm accepts the one with the most ancient tabu status.
Tabu search

Algorithm TabuSearch(I, x^(0), L)
  x := x^(0); x* := x^(0); Ā := ∅;
  While Stop() = false do
    z̄ := +∞;
    For each y ∈ N(x) do
      If z(y) < z̄ then
        If Tabu(y, Ā) = false or z(y) < z(x*) then
          x' := y; z̄ := z(y);
        EndIf
      EndIf
    EndFor
    Ā := Update(Ā, x, x', L);
    x := x';
    If z(x) < z(x*) then x* := x;
  EndWhile
  Return (x*, z(x*));
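A minimal Python rendering of this pseudocode; the tabu-list callbacks are kept abstract, and the usage example below simplifies matters by making every visited solution permanently tabu and stopping when the whole neighborhood is tabu.

```python
import math

def tabu_search(x0, neighborhood, z, is_tabu, update_tabu, iters=100):
    """Skeleton of Tabu Search: pick the best non-tabu neighbor,
    with aspiration for tabu solutions beating the incumbent."""
    x, best = x0, x0
    tabu = set()                        # the set A-bar of tabu attributes
    for _ in range(iters):
        candidate, cand_z = None, math.inf
        for y in neighborhood(x):
            # aspiration criterion: a tabu solution is accepted if it beats the incumbent
            if z(y) < cand_z and (not is_tabu(y, tabu) or z(y) < z(best)):
                candidate, cand_z = y, z(y)
        if candidate is None:           # whole neighborhood is tabu: stop (simplification)
            break
        tabu = update_tabu(tabu, x, candidate)
        x = candidate
        if z(x) < z(best):
            best = x
    return best

# toy usage: minimize vals[x] over positions 0..9 with neighbors x-1, x+1;
# the search crosses the local optimum at position 4 and reaches position 6
vals = [3, 4, 2, 5, 1, 6, 0, 7, 8, 9]
best = tabu_search(0,
                   lambda x: [y for y in (x - 1, x + 1) if 0 <= y < len(vals)],
                   lambda x: vals[x],
                   lambda y, tabu: y in tabu,
                   lambda tabu, x, y: tabu | {x},
                   iters=20)
```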
Tabu attributes

Some possible definitions of attribute:
- an element belongs to the solution (A(x) = x): when the move from x to x' deletes an element i from the solution, the tabu status forbids the reinsertion of i in the next L iterations; every solution containing element i becomes tabu;
- an element does not belong to the solution (A(x) = E \ x): when the move from x to x' inserts an element i into the solution, the tabu status forbids the deletion of i in the next L iterations; every solution without element i becomes tabu.

It is common to use several attributes together, each one with its own tabu tenure and tabu list (e.g., after replacing i with j, it is forbidden to delete j for L_out iterations and to reinsert i for L_in iterations, with L_out ≤ L_in).
Tabu attributes

Other examples of attributes:
- the value of the objective function;
- the value of an auxiliary function (e.g., the distance from the best incumbent solution).

Complex attributes can be obtained by combining simple ones: if a move from x to x' replaces element i with element j, we can forbid the replacement of j with i, while still allowing deleting j only or inserting i only.
Efficient evaluation of the tabu status

Even when it is based on attributes, the evaluation of the tabu status of a solution must be efficient: scanning the whole solution is not acceptable. Attributes are associated with moves, not with solutions. The evaluation can be done in constant time by recording in a data structure, for each attribute, the iteration in which its tabu status began.

When insertions are tabu (the attribute is the presence of an element):
- at iteration t, it is tabu to insert any i ∈ E \ x such that t ≤ T_i^in + L_in;
- at iteration t, we set T_i^in := t for each i just deleted from x.

When deletions are tabu (the attribute is the absence of an element):
- at iteration t, it is tabu to delete any i ∈ x such that t ≤ T_i^out + L_out;
- at iteration t, we set T_i^out := t for each i just inserted into x.

If both are used, one vector is enough, since either i ∈ x or i ∈ E \ x. For more complex attributes, matrices or other data structures are needed.
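The timestamp scheme above, with a single vector T, can be sketched as follows (class and method names are illustrative):

```python
class TabuTimestamps:
    """Constant-time tabu tests via last-move timestamps.
    One vector suffices because each element is either inside
    or outside the current solution."""
    def __init__(self, n, l_in, l_out):
        self.T = [float("-inf")] * n    # iteration at which the tabu status began
        self.l_in, self.l_out = l_in, l_out

    def insertion_is_tabu(self, i, t):  # i currently outside the solution
        return t <= self.T[i] + self.l_in

    def deletion_is_tabu(self, i, t):   # i currently inside the solution
        return t <= self.T[i] + self.l_out

    def record_move(self, moved, t):    # elements just inserted or deleted
        for i in moved:
            self.T[i] = t
```

For example, if element 2 is deleted at iteration 10 with l_in = 3, reinserting it stays tabu through iteration 13 and becomes allowed at iteration 14.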
Example: the TSP

We consider the neighborhood N_{R2} generated by the 2-opt exchanges and we use both the presence and the absence of edges as attributes. Initially T_{ij} := −∞ for every edge (i, j) ∈ A. At each iteration t, the algorithm scans the n(n−1)/2 pairs of edges that can be deleted and the corresponding pairs of edges that would replace them. The move (i, j), which replaces (s_i, s_{i+1}) and (s_j, s_{j+1}) with (s_i, s_j) and (s_{i+1}, s_{j+1}), is tabu at iteration t if one of the following conditions holds:
1. t ≤ T_{s_i, s_{i+1}} + L_out
2. t ≤ T_{s_j, s_{j+1}} + L_out
3. t ≤ T_{s_i, s_j} + L_in
4. t ≤ T_{s_{i+1}, s_{j+1}} + L_in

Once the move (i*, j*) has been chosen, the data structures are updated:
1. T_{s_{i*}, s_{i*+1}} := t
2. T_{s_{j*}, s_{j*+1}} := t
3. T_{s_{i*}, s_{j*}} := t
4. T_{s_{i*+1}, s_{j*+1}} := t

Since n edges belong to the solution and n(n−2) do not, it is convenient to set L_out < L_in.
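A sketch of the tabu test for a 2-opt move; representing T as a dictionary keyed by unordered edges is a simplifying assumption, not the slide's matrix layout.

```python
def move_is_tabu(T, s, i, j, t, l_in, l_out):
    """Tabu test for the 2-opt move that removes edges (s[i], s[i+1]) and
    (s[j], s[j+1]) and inserts (s[i], s[j]) and (s[i+1], s[j+1]).
    T maps an unordered edge to the iteration of its last status change."""
    n = len(s)
    edge = lambda a, b: frozenset((a, b))
    removed = [edge(s[i], s[(i + 1) % n]), edge(s[j], s[(j + 1) % n])]
    inserted = [edge(s[i], s[j]), edge(s[(i + 1) % n], s[(j + 1) % n])]
    return (any(t <= T.get(e, float("-inf")) + l_out for e in removed)
            or any(t <= T.get(e, float("-inf")) + l_in for e in inserted))
```

With tour s = [0, 1, 2, 3] and edge (0, 1) last touched at iteration 5, the move (0, 2) is tabu at t = 7 for l_out = 2 but allowed at t = 8.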
Example: the KP

The neighborhood N_{H1} contains the solutions at Hamming distance at most 1. For simplicity we use the attribute "flipped variable": a vector T records the iteration at which each variable i ∈ E was flipped the last time. For instance, with L = 3: when variable i is flipped at iteration t, we set T_i := t, and flipping i again is tabu at every iteration t' ≤ T_i + 3.
Tuning the tabu tenure

The value L of the tabu tenure is of paramount importance:
- too large values may hide the global optimum and, in the worst case, block the search;
- too small values may leave the search in useless regions and, in the worst case, allow looping.

The best value for L in general
- depends on the size of the instance;
- often grows slowly with it (a recipe is L ∈ O(√n));
- almost constant values work fine also for different sizes.

Extracting L at random from a range [L_min, L_max] breaks loops. Adaptive tabu tenures react to the results of the search, updating L within a given range [L_min, L_max]:
- L decreases when the current solution x improves: the search is likely approaching a local optimum and one wants to intensify the search;
- L increases when the current solution x worsens: the search is likely escaping from a visited attraction basin and one wants to diversify the search.
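The adaptive rule can be sketched as a one-step update; the ±1 step size is an illustrative choice.

```python
def adaptive_tenure(L, improved, l_min, l_max):
    """Shrink the tenure on improvement (intensify), grow it on worsening
    (diversify), keeping L within [l_min, l_max]."""
    return max(l_min, L - 1) if improved else min(l_max, L + 1)
```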
Variations

In the long range, adaptive techniques tend to lose their effectiveness. Long-term strategies are employed:
- Reactive Tabu Search: uses efficient data structures to record visited solutions and detect loops; if solutions repeat too often, it shifts the range [L_min, L_max] to larger values.
- Frequency-based Tabu Search: records the frequency of each attribute in the solution, in data structures similar to the tabu list; if an attribute occurs very often, it favors the moves that delete it, by a modification of z (as in DLS), or it forbids the moves that insert it, or penalizes them by a modification of z.
- Exploring Tabu Search: re-initializes the search from good solutions already found but never used as current solutions (they are the second-best solutions in some neighborhood).
- Granular Tabu Search: modifies the neighborhood, progressively enlarging it.