Metaheuristics and Local Search
Discrete Mathematics for Bioinformatics WS 07/08, G. W. Klau, 31 January 2008


Discrete optimization problems

- Variables x_1, ..., x_n.
- Variable domains D_1, ..., D_n, with D_j ⊆ Z.
- Constraints C_1, ..., C_m, with C_i ⊆ D_1 × ... × D_n.
- Objective function f : D_1 × ... × D_n → R, to be minimized.

Solution approaches

- Complete (exact) algorithms: systematic search
  - Integer linear programming
  - Finite domain constraint programming
- Approximate algorithms
  - Heuristic approaches: heuristic search
    - Constructive methods: construct solutions from partial solutions
    - Local search: improve solutions through neighborhood search
    - Metaheuristics: combine basic heuristics in higher-level frameworks
  - Polynomial-time approximation algorithms for NP-hard problems

Metaheuristics

Heuriskein (ευρισκειν): to find. Meta: beyond, on an upper level.

Survey paper: C. Blum, A. Roli: Metaheuristics in Combinatorial Optimization, ACM Computing Surveys, Vol. 35, 2003.

Characteristics

- Metaheuristics are strategies that guide the search process.
- The goal is to explore the search space efficiently in order to find (near-)optimal solutions.
- Metaheuristic algorithms are approximate and usually non-deterministic.
- They may incorporate mechanisms to avoid getting trapped in confined areas of the search space.

Characteristics (2)

- The basic concepts of metaheuristics permit a description on an abstract level.
- Metaheuristics are not problem-specific.
- Metaheuristics may make use of domain-specific knowledge in the form of heuristics that are controlled by the upper-level strategy.
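To make the formal definition concrete, here is a minimal sketch (not from the lecture) encoding a toy 0/1 problem in Python. The instance itself (weights, capacity, costs) is invented for illustration.

```python
# A discrete optimization problem in the form above (illustrative only):
# variables x_1..x_4 with domains D_j = {0, 1}, one capacity constraint,
# and a linear objective to be minimized.

DOMAINS = [(0, 1)] * 4                  # D_1, ..., D_4
WEIGHTS = [3, 5, 2, 4]                  # used by the constraint C_1
COSTS = [-6, -8, -3, -5]                # objective coefficients

def feasible(x):
    """Constraint C_1: total weight must not exceed 9 (assumed bound)."""
    return sum(w * xi for w, xi in zip(WEIGHTS, x)) <= 9

def f(x):
    """Objective f : D_1 x ... x D_n -> R, to be minimized."""
    return sum(c * xi for c, xi in zip(COSTS, x))
```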

Today, more advanced metaheuristics use search experience (embodied in some form of memory) to guide the search.

Classification of metaheuristics

- Single-point search (trajectory methods) vs. population-based search
- Nature-inspired vs. non-nature-inspired
- Dynamic vs. static objective function
- One vs. various neighborhood structures
- Memory usage vs. memory-less methods

I. Trajectory methods

- Basic local search: iterative improvement
- Simulated annealing
- Tabu search
- Explorative search methods
  - Greedy Randomized Adaptive Search Procedure (GRASP)
  - Variable Neighborhood Search (VNS)
  - Guided Local Search (GLS)
  - Iterated Local Search (ILS)

Local search

- Find an initial solution s
- Define a neighborhood N(s)
- Explore the neighborhood
- Proceed with a selected neighbor

Simple descent

    procedure SimpleDescent(solution s)
      repeat
        choose s' ∈ N(s)
        if f(s') < f(s) then
          s ← s'
        end if
      until f(s') ≥ f(s) for all s' ∈ N(s)
    end

Local and global minima: simple descent stops as soon as no neighbor improves on s, i.e., in a local minimum, which need not be a global one.
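A minimal runnable sketch of SimpleDescent in Python, using first improvement on a toy bit-vector objective; the objective and the bit-flip neighborhood are illustrative assumptions, not from the lecture.

```python
import random

def f(x):
    """Toy objective to minimize (stand-in for a real problem)."""
    return sum(x) + (5 if x[0] != x[-1] else 0)

def neighbors(x):
    """N(x): all solutions obtained by flipping one bit."""
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        yield y

def simple_descent(x):
    """First-improvement descent: move to the first improving neighbor."""
    improved = True
    while improved:
        improved = False
        for y in neighbors(x):
            if f(y) < f(x):
                x, improved = y, True
                break               # first improvement found: rescan N(x)
    return x                        # local minimum w.r.t. N

x = simple_descent([random.randint(0, 1) for _ in range(8)])
print(x, "->", f(x))
```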

Deepest descent

    procedure DeepestDescent(solution s)
      repeat
        choose s' ∈ N(s) with f(s') ≤ f(s'') for all s'' ∈ N(s)
        if f(s') < f(s) then
          s ← s'
        end if
      until f(s') ≥ f(s) for all s' ∈ N(s)
    end

Problem: local minima.

Multistart and deepest descent

    procedure Multistart
      iter ← 1
      f(Best) ← ∞
      repeat
        choose a starting solution s_0 at random
        s ← DeepestDescent(s_0)
        if f(s) < f(Best) then
          Best ← s
        end if
        iter ← iter + 1
      until iter = IterMax
    end
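A compact Python sketch of multistart combined with deepest (best-improvement) descent; objective, neighborhood, and parameter values are illustrative assumptions.

```python
import random

def f(x):
    """Toy objective to minimize (illustrative)."""
    return sum((xi - i % 2) ** 2 for i, xi in enumerate(x))

def neighbors(x):
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        yield y

def deepest_descent(x):
    """Best-improvement descent: always move to the best neighbor."""
    while True:
        best = min(neighbors(x), key=f)
        if f(best) >= f(x):
            return x                # no improving neighbor: local minimum
        x = best

def multistart(iter_max=20, n=8):
    best = None
    for _ in range(iter_max):
        s0 = [random.randint(0, 1) for _ in range(n)]
        s = deepest_descent(s0)
        if best is None or f(s) < f(best):
            best = s
    return best

print(multistart())
```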

Simulated annealing

Kirkpatrick '83. Anneal: to heat and then slowly cool (esp. glass or metal) to reach a minimal-energy state. Like standard local search, but sometimes accept a worse solution.

Select a random solution from the neighborhood and accept it with probability given by the Boltzmann distribution:

    p = 1                                   if f(new) < f(old),
    p = exp(-(f(new) - f(old)) / T)         otherwise.

Start with a high temperature T and gradually lower it (cooling schedule).

[Figure: acceptance probability as a function of the cost difference and the temperature T.]

Algorithm

    s ← GenerateInitialSolution()
    T ← T_0
    while termination conditions not met do
      s' ← PickAtRandom(N(s))
      if f(s') < f(s) then
        s ← s'
      else
        accept s' as new solution with probability p(T, s', s)
      end if
      Update(T)
    end while
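A runnable Python sketch of this scheme with Boltzmann acceptance and geometric cooling; the toy objective, neighborhood, and parameter values (T_0, cooling factor, step count) are assumptions for illustration.

```python
import math
import random

def f(x):
    return sum(x) + (5 if x[0] != x[-1] else 0)   # toy objective

def random_neighbor(x):
    y = list(x)
    i = random.randrange(len(y))
    y[i] = 1 - y[i]                               # flip one random bit
    return y

def simulated_annealing(x, t0=10.0, alpha=0.95, steps=2000):
    t = t0
    best = x
    for _ in range(steps):
        y = random_neighbor(x)
        delta = f(y) - f(x)
        # Boltzmann acceptance: always accept improvements, accept a
        # deterioration with probability exp(-delta / t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
            if f(x) < f(best):
                best = x
        t *= alpha                                # geometric cooling schedule
    return best

print(simulated_annealing([random.randint(0, 1) for _ in range(10)]))
```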

Tabu search

Glover '86. Local search with short-term memory, to escape local minima and to avoid cycles.

- Tabu list: keep track of the last r moves, and do not allow going back to these.
- Allowed set: solutions that do not belong to the tabu list.
- Select a solution from the allowed set, add it to the tabu list, and update the tabu list.

Basic algorithm

    s ← GenerateInitialSolution()
    TabuList ← ∅
    while termination conditions not met do
      s ← ChooseBestOf(N(s) \ TabuList)
      Update(TabuList)
    end while

Choices in tabu search

- Neighborhood
- Size of the tabu list (tabu tenure)
- Kind of tabu to use (complete solutions vs. attributes): tabu conditions
- Aspiration criteria
- Termination condition
- Long-term memory: recency, frequency, quality, influence

Refined algorithm

    s ← GenerateInitialSolution()
    initialize tabu lists TL_1, ..., TL_r
    k ← 0
    while termination conditions not met do
      AllowedSet(s, k) ← { s' ∈ N(s) | s' does not violate a tabu condition
                           or satisfies at least one aspiration condition }
      s ← ChooseBestOf(AllowedSet(s, k))
      UpdateTabuListsAndAspirationConditions()
      k ← k + 1
    end while
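A minimal Python sketch of attribute-based tabu search on the same toy bit-vector problem, with the flipped position as the tabu attribute and improvement of the best known solution as the aspiration criterion; tenure, step count, and the problem itself are assumptions.

```python
import random
from collections import deque

def f(x):
    return sum(x) + (5 if x[0] != x[-1] else 0)   # toy objective

def tabu_search(x, tenure=5, steps=200):
    tabu = deque(maxlen=tenure)         # attributes of the last moves
    best = list(x)
    for _ in range(steps):
        # Allowed set: neighbors whose move attribute (flipped index) is
        # not tabu, unless the move improves the best known solution
        # (aspiration criterion).
        candidates = []
        for i in range(len(x)):
            y = list(x)
            y[i] = 1 - y[i]
            if i not in tabu or f(y) < f(best):
                candidates.append((f(y), i, y))
        if not candidates:
            break
        _, i, y = min(candidates)       # best admissible neighbor
        x = y
        tabu.append(i)                  # forbid undoing this move for a while
        if f(x) < f(best):
            best = list(x)
    return best

print(tabu_search([random.randint(0, 1) for _ in range(10)]))
```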

II. Population-based search

- Evolutionary computation
- Ant colony optimization

Evolutionary computation

Idea: mimic evolution, i.e., obtain better solutions by combining current ones.

Keep several current solutions, called the population or generation. Create a new generation:

- select a pool of promising solutions, based on a fitness function;
- create new solutions by combining solutions in the pool in various ways (recombination, crossover);
- add random mutations.

Variants: evolutionary programming, evolution strategies, genetic algorithms.

Algorithm

    P ← GenerateInitialPopulation()
    Evaluate(P)
    while termination conditions not met do
      P' ← Recombine(P)
      P'' ← Mutate(P')
      Evaluate(P'')
      P ← Select(P'' ∪ P)
    end while

Crossover and mutations

Individuals (solutions) are often coded as bit vectors. Crossover operations provide new individuals, e.g., one-point crossover:

    parents:   101101 | 0110      offspring:   101101 | 1011
               000110 | 1011                   000110 | 0110

Mutations are often helpful, e.g., flip a random bit.

Further issues

- Individuals vs. solutions
- Evolution process: generational replacement vs. steady state, fixed vs. variable population size
- Use of a neighborhood structure to define recombination partners (structured vs. unstructured populations)
- Two-parent vs. multi-parent crossover
- Infeasible individuals: reject / penalize / repair
- Intensification by local search
- Diversification by mutations
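As a concrete illustration of selection, one-point crossover, and mutation on bit vectors, here is a small genetic-algorithm sketch in Python; the fitness function, population size, and rates are invented for this example.

```python
import random

def fitness(x):
    """Toy fitness to maximize (illustrative)."""
    return sum(x)

def crossover(p1, p2):
    """One-point crossover, as in the bit-vector example above."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(x, rate=0.05):
    return [1 - b if random.random() < rate else b for b in x]

def evolve(pop_size=20, n=10, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pool = pop[: pop_size // 2]           # select promising solutions
        children = []
        while len(children) < pop_size - len(pool):
            c1, c2 = crossover(*random.sample(pool, 2))
            children += [mutate(c1), mutate(c2)]
        pop = pool + children[: pop_size - len(pool)]
    return max(pop, key=fitness)

print(evolve())
```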

Ant colony optimization

Dorigo '92. Observation: ants are able to find the shortest path from their nest to a food source quickly. How?

- Each ant leaves a pheromone trail.
- When presented with a path choice, ants are more likely to choose the trail with the higher pheromone concentration.
- The shortest path gets high concentrations because ants choosing it can return more often.

Ant colony optimization (2)

- Ants are simulated by individual (ant) agents (swarm intelligence).
- Each decision variable has an associated artificial pheromone level.
- By dispatching a number of ants, the pheromone levels are adjusted according to how useful they are.
- Pheromone levels may also evaporate to discourage suboptimal solutions.

Construction graph

Complete graph G = (C, L):

- C: solution components
- L: connections
- pheromone trail values τ_i, for c_i ∈ C
- heuristic values η_i

Moves in the graph depend on transition probabilities:

    p(c_r | s_a[c_l]) = [η_r]^α [τ_r]^β / Σ_{c_u ∈ J(s_a[c_l])} [η_u]^α [τ_u]^β    if c_r ∈ J(s_a[c_l]),
    p(c_r | s_a[c_l]) = 0                                                          otherwise.

Algorithm (ACO)

    InitializePheromoneValues()
    while termination conditions not met do
      ScheduleActivities
        AntBasedSolutionConstruction()
        PheromoneUpdate()
        DaemonActions()        % optional
      end ScheduleActivities
    end while

Pheromone update

Set

    τ_j ← (1 - ρ) τ_j + Σ_{a ∈ A} Δτ_j^{s_a},

where

    Δτ_j^{s_a} = F(s_a)   if c_j is a component of s_a,
    Δτ_j^{s_a} = 0        otherwise.
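A Python sketch of the two ACO building blocks just defined: roulette-wheel selection of the next component according to the transition probabilities, and the evaporation/deposit pheromone update. The dictionaries, the quality function F, and all parameter values are assumptions for this sketch.

```python
import random

def choose_component(candidates, eta, tau, alpha=1.0, beta=2.0):
    """Pick the next component c_r from the feasible set J with
    probability proportional to eta[r]**alpha * tau[r]**beta."""
    weights = [eta[c] ** alpha * tau[c] ** beta for c in candidates]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]

def pheromone_update(tau, solutions, quality, rho=0.1):
    """tau_j <- (1 - rho) * tau_j + sum over ants a of F(s_a) for all
    components c_j that occur in solution s_a."""
    for j in tau:
        tau[j] *= 1.0 - rho                 # evaporation
    for s in solutions:
        for j in s:
            tau[j] += quality(s)            # deposit F(s_a)
    return tau

tau = {c: 1.0 for c in "abc"}
eta = {"a": 0.5, "b": 1.0, "c": 2.0}
print(choose_component(list(tau), eta, tau))
```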

Intensification and diversification

Glover and Laguna 1997: "The main difference between intensification and diversification is that during an intensification stage the search focuses on examining neighbors of elite solutions. ... The diversification stage on the other hand encourages the search process to examine unvisited regions and to generate solutions that differ in various significant ways from those seen before."

Case study: timetabling

Rossi-Doria et al. 2002, http://iridia.ulb.ac.be/~meta/newsite/downloads/tt_comparison.pdf

Given a set of events E, a set of rooms R, a set of students S, and a set of features F. Each student attends a number of events, and each room has a size. Assign all events a timeslot and a room so that the following hard constraints are satisfied:

- no student attends more than one event at the same time;
- the room is big enough for all attending students and satisfies all features required by the event;
- only one event is in each room at any timeslot.

Case study: timetabling (2)

Penalties for soft constraint violations:

- a student has a class in the last slot of a day;
- a student has more than two classes in a row;
- a student has a single class on a day.

Objective: minimize the number of soft constraint violations in a feasible solution.

Common neighborhood structure

- Solution: an ordered list of length |E|; the i-th element indicates the timeslot to which event i is assigned.
- Room assignments are generated by a matching algorithm.
- Neighborhood: N = N_1 ∪ N_2, where N_1 moves a single event to a different timeslot and N_2 swaps the timeslots of two events.

Common local search procedure

- Stochastic first-improvement local search.
- Go through the list of all events in a random order.
- Try all possible moves in the neighbourhood for every event involved in constraint violations, until an improvement is found.
- Solve hard constraint violations first. If feasibility is reached, look at soft constraint violations as well.
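A small Python sketch of this solution representation and the two neighborhood moves; the representation is the ordered list of timeslot indices described above, while the matching-based room assignment is omitted. Function names and parameters are illustrative.

```python
import random

def move_event(timeslots, n_slots):
    """N1: assign a randomly chosen event to a different timeslot."""
    s = list(timeslots)
    e = random.randrange(len(s))
    s[e] = random.choice([t for t in range(n_slots) if t != s[e]])
    return s

def swap_events(timeslots):
    """N2: swap the timeslots of two randomly chosen events."""
    s = list(timeslots)
    e1, e2 = random.sample(range(len(s)), 2)
    s[e1], s[e2] = s[e2], s[e1]
    return s

s = [random.randrange(9) for _ in range(12)]   # 12 events, 9 timeslots
print(move_event(s, 9))
print(swap_events(s))
```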

Metaheuristics

The five metaheuristics compared in the case study:

1. Evolutionary algorithm
2. Ant colony optimization
3. Iterated local search
4. Simulated annealing
5. Tabu search

1. Evolutionary algorithm

- Steady-state evolution process: at each generation, only one couple of parent individuals is selected for reproduction.
- Tournament selection: choose a number of individuals randomly from the current population and select the best ones in terms of the fitness function as parents.
- Fitness function: weighted sum of hard and soft constraint violations,

      f(s) := #hcv(s) · C + #scv(s).

1. Evolutionary algorithm (2)

- Uniform crossover: for each event, a timeslot assignment is inherited from the first or second parent with equal probability.
- Mutation: a random move in an extended neighbourhood (3-cycle permutation).
- Search parameters: population size n = 10, tournament size 5, crossover rate α = 0.8, mutation rate β = 0.5.
- Find a balance between the number of steps in local search and the number of generations.
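A minimal Python sketch of the three operators just described (weighted fitness, tournament selection, uniform crossover); the weight C and the violation counters hcv/scv are placeholders, not values from the paper.

```python
import random

def fitness(s, hcv, scv, C=1_000_000):
    """f(s) = #hcv(s) * C + #scv(s); C and the counters hcv/scv are
    assumptions for this sketch."""
    return hcv(s) * C + scv(s)

def tournament(population, f, size=5):
    """Choose `size` random individuals, return the best (lowest f)."""
    return min(random.sample(population, size), key=f)

def uniform_crossover(p1, p2):
    """Each event inherits its timeslot from either parent with equal
    probability."""
    return [random.choice(pair) for pair in zip(p1, p2)]
```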

2. Ant colony optimization

- At each iteration, each of m ants constructs, event by event, a complete assignment of the events to the timeslots.
- To make an assignment, an ant takes the next event from a pre-ordered list and probabilistically chooses a timeslot, guided by two types of information:
  1. heuristic information: an evaluation of the constraint violations caused by making the assignment, given the assignments already made;
  2. pheromone information: an estimate of the utility of making the assignment, as judged by previous iterations of the algorithm.
- Matrix of pheromone values τ : E × T → R≥0. Initialization to a parameter τ_0, update by local and global rules.

2. Ant colony optimization (2)

- An event-timeslot pair which has been part of good solutions will have a high pheromone value and consequently a higher chance of being chosen again.
- At the end of the iterative construction, an event-timeslot assignment is converted into a candidate solution (timetable) using the matching algorithm.
- This candidate solution is further improved by the local search routine.
- After all m ants have generated their candidate solutions, a global update of the pheromone values is performed using the best solution found since the beginning.

3. Iterated local search

Provide new starting solutions obtained from perturbations of a current solution. This often leads to far better results than using random restarts.

Four subprocedures:

1. GenerateInitialSolution: generates an initial solution s_0;
2. Perturbation: modifies the current solution s, leading to some intermediate solution s';
3. LocalSearch: obtains an improved solution s'';
4. AcceptanceCriterion: decides to which solution the next perturbation is applied.

Perturbation

Three types of moves:

- P1: choose a different timeslot for a randomly chosen event;
- P2: swap the timeslots of two randomly chosen events;
- P3: choose randomly between the two previous types of moves and a 3-exchange move of the timeslots of three randomly chosen events.

Strategy: apply each of these moves k times, where k is chosen from the set {1, 5, 10, 25, 50, 100}. Take random choices according to a uniform distribution.

Acceptance criteria

- Random walk: always accept the solution returned by local search.
- Accept if better.
- Simulated annealing:

      SA1: P_1(s, s') = exp((f(s) - f(s')) / T)
      SA2: P_2(s, s') = exp((f(s) - f(s')) / (T · f(s_best)))

Best parameter setting (for medium instances): P1, k = 5, SA1 with T = 0.1.
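A Python skeleton of the four ILS subprocedures, with the two acceptance criteria from the notes as examples; the callables are supplied by the user, and this is an illustrative sketch, not the code of the compared implementation.

```python
import math
import random

def iterated_local_search(s0, f, local_search, perturb, accept, iters=100):
    """ILS: GenerateInitialSolution, then repeat Perturbation,
    LocalSearch, and AcceptanceCriterion."""
    s = local_search(s0)
    best = s
    for _ in range(iters):
        s_new = local_search(perturb(s))   # Perturbation + LocalSearch
        if f(s_new) < f(best):
            best = s_new
        s = accept(s, s_new)               # AcceptanceCriterion
    return best

def accept_if_better(f):
    return lambda s, s_new: s_new if f(s_new) < f(s) else s

def sa1(f, T=0.1):
    """SA1: accept a worse s_new with probability exp((f(s) - f(s_new)) / T)."""
    def accept(s, s_new):
        if f(s_new) < f(s) or random.random() < math.exp((f(s) - f(s_new)) / T):
            return s_new
        return s
    return accept
```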

4. Simulated annealing

Two phases:

1. search for feasible solutions, i.e., satisfy all hard constraints;
2. minimize soft constraint violations.

Strategies

- Initial temperature: sample the neighbourhood of a randomly generated solution, compute the average value of the variation in the evaluation function, and multiply this value by a given factor.
- Cooling schedule:
  1. geometric cooling: T_{n+1} = α · T_n, with 0 < α < 1;
  2. temperature reheating: increase the temperature if the rejection ratio (number of moves rejected / number of moves tested) exceeds a given limit.
- Temperature length: proportional to the size of the neighborhood.

5. Tabu search

- Moves are done by moving one event or by swapping two events.
- Tabu list: forbid a move if at least one of the events involved has been moved less than l steps before.
- Size of the tabu list: l = number of events divided by a suitable constant k (here k = 100).
- Variable neighbourhood set: every move is a neighbour with probability 0.1, to decrease the probability of generating cycles and to reduce the size of the neighbourhood for faster exploration.
- Aspiration criterion: perform a tabu move if it improves the best known solution.

Evaluation

http://iridia.ulb.ac.be/~msampels/ttmn.data/

5 small, 5 medium, and 2 large instances:

    Type    small   medium   large
    |E|     100     400      400
    |S|     80      200      400
    |R|     5       10       10

500 (small), 50 (medium), and 20 (large) independent trials per metaheuristic per instance. Diagrams show the results of all trials on a single instance; boxes show the range between the 25% and 75% quantiles.

Evaluation (2)

- Small: all algorithms reach feasibility in every run; ILS shows the best and TS the worst overall performance.
- Medium: SA is best, but does not achieve feasibility in some runs. ACO is worst.
- Large01: most metaheuristics do not even achieve feasibility. TS reaches feasibility in about 8% of the trials.
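A small Python sketch of the cooling schedule with reheating described above; the limit and reheating factor are assumed parameter values, not those of the paper.

```python
def next_temperature(t, rejected, tested, alpha=0.95,
                     reheat_limit=0.95, reheat_factor=2.0):
    """Geometric cooling T_{n+1} = alpha * T_n, with reheating when the
    rejection ratio rejected/tested exceeds a given limit."""
    if tested and rejected / tested > reheat_limit:
        return t * reheat_factor      # reheat: the search has frozen
    return alpha * t                  # normal geometric cooling step

print(next_temperature(1.0, rejected=10, tested=100))   # cools to 0.95
print(next_temperature(1.0, rejected=99, tested=100))   # reheats to 2.0
```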

- Large02: ILS is best, with feasibility in about 97% of the trials, against 10% for ACO and GA. SA never reaches feasibility. TS always gives feasible solutions, but with worse results than ILS and ACO in terms of soft constraints.

[Figures: box plots of the number of soft constraint violations and of the ranks per metaheuristic (ACO, GA, ILS, SA, TS), together with the percentage of invalid solutions, for instance medium01.tim (900 sec) and instance large02.tim (9000 sec).]