Methods for finding optimal configurations


CS 2710 Foundations of AI
Lecture 7: Methods for finding optimal configurations
Milos Hauskrecht, milos@pitt.edu, 5329 Sennott Square

Search for the optimal configuration

Constraint satisfaction problem:
- Objective: find a configuration that satisfies all constraints.
Optimal configuration (state) problem:
- Objective: find the best configuration (state).
- The quality of a state is defined by some quality measure that reflects our preference towards each configuration (or state).
- Our goal: optimize the configuration according to the quality measure, also referred to as the objective function.

Search for the optimal configuration

Optimal configuration search problem:
- Configurations (states): each state has a quality measure q(s) that reflects our preference towards it.
- Goal: find the configuration with the best value, expressed in terms of the objective function: s* = argmax_s q(s).
If the space of configurations we search among is:
- discrete or finite, then it is a combinatorial optimization problem;
- continuous, then it is solved as a parametric optimization problem.

Example: Traveling salesman problem

Problem: a graph with distances between cities; a tour is a path that visits every city once and returns to the start, e.g. the tour ABCDE over cities A-E.
Goal: find the shortest tour.
[Figure: five cities A-E connected by the tour ABCDE.]
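To make the objective concrete, here is a minimal sketch of the tour-length quality measure. The slides only show a figure, so the distance values below are made up for illustration:

```python
# Sketch of the TSP objective over cities A-E (distances are hypothetical).
import itertools

CITIES = "ABCDE"
PAIRS = list(itertools.combinations(CITIES, 2))          # the 10 city pairs
DIST = dict(zip(PAIRS, [7, 4, 3, 5, 2, 6, 4, 8, 5, 6]))  # made-up distances

def distance(a, b):
    """Symmetric lookup: d(a, b) = d(b, a)."""
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def tour_length(tour):
    """Length of a closed tour such as 'ABCDE' (returns to the start)."""
    return sum(distance(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

print(tour_length("ABCDE"))  # quality q(s) = -tour_length(s): shorter is better
```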

Example: N-queens CSP problem

Is it possible to formulate the problem as an optimal configuration search problem?
Yes. The quality of a configuration in a CSP can be measured by the number of violated constraints.
Solving: minimize the number of constraint violations.
[Figure: three board configurations with # of violations = 3, # of violations = 1, and # of violations = 0.]
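A minimal sketch of this violation count, assuming the usual one-queen-per-column representation (state[i] is the row of the queen in column i; this encoding is my choice, not the slides'):

```python
def num_violations(state):
    """Number of attacking queen pairs; state[i] = row of queen in column i."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j]                  # same row
               or abs(state[i] - state[j]) == j - i)    # same diagonal

print(num_violations((0, 1, 2, 3)))  # all on one diagonal -> 6
print(num_violations((1, 3, 0, 2)))  # a 4-queens solution -> 0
```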

Example: N-queens

Originally a CSP problem, but it is also possible to formulate it as an optimal configuration search problem: the constraints are mapped to an objective cost function that counts the number of violated constraints.
[Figure: boards with # of violations = 3 and # of violations = 0.]

Iterative optimization methods

Searching systematically for the best configuration, e.g. with DFS, may not be the best solution:
- Worst-case running time: exponential in the number of variables.
Solutions to large optimal configuration problems are often found more effectively in practice using iterative optimization methods.
Examples of methods:
- Hill climbing
- Simulated annealing
- Genetic algorithms

Iterative optimization methods

Basic properties:
- Search the space of complete configurations.
- Take advantage of local moves: operators make local changes to complete configurations.
- Keep track of just one state (the current state); no memory of past states!!! No search tree is necessary!!!

Iterative optimization methods

[Figure: a current configuration of quality q(s) = 167 and the next configurations reachable via local operators, with qualities q(s) = 145, 150, 180, 167, 191.]

Iterative optimization methods

[Figure repeated: the current configuration (q(s) = 167) and the next configurations generated by the local operators.]

Example: N-queens

Local operators for generating the next state:
- Select a variable (a queen).
- Reallocate its position.
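A sketch of this operator in the column-tuple representation assumed above:

```python
import random

def random_neighbor(state):
    """Local move: pick a queen (a column) and move it to a different row."""
    n = len(state)
    col = random.randrange(n)                     # select a variable (queen)
    new_row = random.choice([r for r in range(n) if r != state[col]])
    s = list(state)
    s[col] = new_row                              # reallocate its position
    return tuple(s)
```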

Example: Traveling salesman problem

Local operator for generating the next state: divide the existing tour into two parts, and reconnect the two parts in the opposite order.
Example: the tour ABCDE.
[Figure: the tour ABCDE over cities A-E.]

Example: Traveling salesman problem

Local operator for generating the next state: divide the existing tour into two parts, and reconnect the two parts in the opposite order.
[Figure: the tour ABCDE divided into Part 1 and Part 2 and reconnected in the opposite order.]
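A sketch of this move, implemented as segment reversal (2-opt style): on a closed tour, merely swapping the two parts yields the same cycle, so the reconnection has to reverse one part. The exact cut points in the slides' figure are not recoverable, so they are chosen at random here:

```python
import random

def reverse_segment(tour):
    """Divide the tour into two parts and reconnect them in the opposite
    order by reversing the middle segment, e.g. 'ABCDE' -> 'ABEDC'."""
    i, j = sorted(random.sample(range(len(tour) + 1), 2))  # two cut points
    return tour[:i] + tour[i:j][::-1] + tour[j:]

print(reverse_segment("ABCDE"))  # e.g. 'AEDCB', 'ABCED', ...
```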

Example: Traveling salesman problem

Local operator: generates the next configuration (state) by rearranging the existing tour.
[Figure: several tours over cities A-E produced by applying the operator.]

Searching the configuration space

Search algorithms keep only one configuration (the current configuration).
Problem: how to decide which operator to apply?
[Figure: the current tour and the candidate next tours.]

Search algorithms

Strategies to choose the configuration (state) to be visited next:
- Hill climbing
- Simulated annealing
Extensions to multiple current states:
- Genetic algorithms
Note: maximization is the inverse of minimization: min q(s) ≡ max −q(s).

Hill climbing

Only configurations one can reach using local moves are considered as candidates.
What move does hill climbing make?
[Figure: value-vs-states landscape with the current state marked "I am currently here".]

Hill climbing

Look at the local neighborhood and choose the neighbor with the best value.
What can go wrong?
[Figure: landscape with candidate moves marked "Better" and "Worse".]

Hill climbing

Hill climbing can get trapped in a local optimum.
How to solve the problem? Ideas?
[Figure: landscape where the search is stuck with no more local improvement.]
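A minimal sketch of the greedy loop just described, for any complete-state problem with a quality function and a neighbor generator (the helper names are mine):

```python
def hill_climb(state, quality, neighbors):
    """Greedy ascent: move to the best local neighbor until none improves."""
    while True:
        best = max(neighbors(state), key=quality)
        if quality(best) <= quality(state):
            return state  # local optimum or plateau: no improving move left
        state = best
```

For N-queens, quality(s) could be -num_violations(s), so maximizing quality reduces violations.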

Hill climbing

Solution: multiple restarts of the hill-climbing algorithm from different initial states.
[Figure: a new starting state may lead to the globally optimal solution.]

Hill climbing

Hill climbing can get clueless on plateaus.
[Figure: landscape with a flat plateau where no neighbor looks better.]
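The restart idea as a sketch (the number of restarts and the random initializer are my choices):

```python
def restart_hill_climb(quality, neighbors, random_state, restarts=20):
    """Run hill climbing from several random initial states; keep the best."""
    best = None
    for _ in range(restarts):
        result = hill_climb(random_state(), quality, neighbors)
        if best is None or quality(result) > quality(best):
            best = result
    return best
```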

Hill climbing and N-queens

The quality of a configuration is given by the number of constraints violated. Then hill climbing reduces the number of violated constraints.
Min-conflict strategy (heuristic), sketched in code below:
- Choose randomly a variable with conflicts.
- Choose its value such that it violates the fewest constraints.
Success!! But not always!!! The local optima problem!!!

Simulated annealing

An alternative solution to the local optima problem:
- Permits bad moves to states with a lower value q(s), hence lets us escape states that lead to a local optimum.
- Gradually decreases the frequency of such moves and their size (the parameter controlling this: temperature).
[Figure: landscape; moves go always up, sometimes down.]
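A sketch of the min-conflict strategy from the slide above, reusing num_violations and the column-tuple representation assumed earlier:

```python
import random

def attacks(state, i, j):
    """Queens in columns i and j attack each other (same row or diagonal)."""
    return state[i] == state[j] or abs(state[i] - state[j]) == abs(i - j)

def min_conflicts_step(state):
    """Pick a conflicted queen at random, then move it to the row in its
    column that violates the fewest constraints."""
    n = len(state)
    conflicted = [c for c in range(n)
                  if any(attacks(state, c, d) for d in range(n) if d != c)]
    if not conflicted:
        return state  # no conflicts left: a solution
    col = random.choice(conflicted)
    s = list(state)
    s[col] = min(range(n), key=lambda r: num_violations(
        tuple(s[:col] + [r] + s[col + 1:])))
    return tuple(s)
```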

Simulated annealing algorithm

Based on a random walk in the configuration space.
Basic iteration step:
- Choose uniformly at random one of the local neighbors of the current state as a candidate state.
- If the candidate state is better than the current state, then accept the candidate and make it the current state;
- else calculate the probability p(Accept) of accepting it, and using p(Accept) choose randomly whether to accept or reject the candidate.

Simulated annealing algorithm

The probability p(Accept) of the candidate state:
- The probability of accepting a state with a better objective function value is always 1.
- The probability of accepting a candidate with a lower objective function value is < 1 and equals
  p(Accept NEXT) = e^(ΔE/T), where ΔE = E_NEXT − E_CURRENT and T > 0,
  with E denoting the objective function value (also called energy). The probability is proportional to the energy difference.
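The acceptance rule as a sketch, following the slides' maximization convention (ΔE < 0 means a worse candidate):

```python
import math, random

def accept(e_next, e_current, T):
    """Metropolis acceptance: always take improvements; take a worse
    candidate only with probability e^(dE/T), where dE = e_next - e_current."""
    dE = e_next - e_current
    return dE > 0 or random.random() < math.exp(dE / T)
```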

Simulated annealing algorithm

[Figure: the current configuration (energy E = 167) and the possible moves, picked randomly from the local neighbors, with energies E = 145, 180, 191.]

Simulated annealing algorithm

Random pick: ΔE = E_NEXT − E_CURRENT = 145 − 167 = −22, so p(Accept) = e^(ΔE/T) = e^(−22/T). Sometimes accept!
[Figure: the current configuration (E = 167) and the randomly picked neighbor with E = 145; the other neighbors have E = 180 and E = 191.]

Simulated annealing algorithm

Random pick: ΔE = E_NEXT − E_CURRENT = 180 − 167 > 0, so p(Accept) = 1. Always accept!
[Figure: the current configuration (E = 167) and the randomly picked neighbor with E = 180; the other neighbors have E = 145 and E = 191.]

Simulated annealing algorithm

The probability of accepting a state with a lower value is p(Accept) = e^(ΔE/T), where ΔE = E_NEXT − E_CURRENT.
The probability p(Accept) is modulated through a temperature parameter T:
- for T → ∞? for T → 0?
Cooling schedule: schedule of changes of the parameter T over the iteration steps.

Simulated annealing algorithm

The probability of accepting a state with a lower value is p(Accept) = e^(ΔE/T), where ΔE = E_NEXT − E_CURRENT.
The probability is modulated through a temperature parameter T:
- for T → ∞, the probability of any move approaches 1;
- for T → 0, the probability that a state with a smaller value is selected goes down and approaches 0.
Cooling schedule: schedule of changes of the parameter T over the iteration steps.
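Putting the pieces together, a minimal sketch of the whole loop; the geometric schedule T ← αT is one common cooling choice, not one the slides prescribe:

```python
def simulated_annealing(state, energy, random_neighbor,
                        T0=100.0, alpha=0.99, T_min=1e-3):
    """Random walk with Metropolis acceptance and geometric cooling.
    Maximizes `energy`, matching the slides' convention."""
    T = T0
    while T > T_min:
        candidate = random_neighbor(state)
        if accept(energy(candidate), energy(state), T):  # sketch above
            state = candidate
        T *= alpha  # cooling schedule: gradually lower the temperature
    return state
```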

Simulated annealing algorithm

The simulated annealing algorithm was developed originally for modeling physical processes (Metropolis et al., 1953): metal cooling and crystallization. Fast cooling leads to many faults, i.e. a higher energy. (There the goal is energy minimization, as opposed to the maximization in the previous slides.)

Properties of the simulated annealing method:
- If the temperature T is decreased slowly enough, the best configuration (state) is always reached.
Applications (very large optimization problems):
- VLSI design
- airline scheduling