Finding optimal configurations (combinatorial optimization)


CS 1571 Introduction to AI, Lecture 10
Finding optimal configurations (combinatorial optimization)
Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square

Constraint satisfaction problem (CSP)
A constraint satisfaction problem is a configuration search problem where:
- a state is defined by a set of variables,
- the goal condition is represented by a set of constraints on the possible variable values.
Special properties of CSPs allow more specific procedures to be designed and applied for solving them.

Example of a CSP: N-queens
Goal: n queens placed in non-attacking positions on the board.
Variables: represent the queens, one for each column: Q1, Q2, Q3, Q4.
Values: the row placement of each queen on the board, {1, 2, 3, 4}. Example: Q1 = 2, Q2 = 4.
Constraints:
- Qi ≠ Qj: two queens are not in the same row,
- |Qi - Qj| ≠ |i - j|: two queens are not on the same diagonal.

Solving a CSP problem
- Initial state: no variable is assigned a value.
- Operators: assign a value to one of the unassigned variables.
- Goal condition: all variables are assigned and no constraints are violated.
Example sequence of search states:
- Unassigned: Q1, Q2, Q3, Q4; Assigned: (none)
- Unassigned: Q2, Q3, Q4; Assigned: Q1 = 1
- Unassigned: Q3, Q4; Assigned: Q1 = 1, Q2 = 4
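The two constraint types can be checked directly in code. A minimal sketch in Python (the helper name `violates` and the dict representation are my own, not from the slides):

```python
def violates(q, i, j):
    """Do the queens in columns i and j (1-based) attack each other?

    q maps column index -> row value, mirroring the Q1..Q4 notation:
    same row means Qi == Qj, same diagonal means |Qi - Qj| == |i - j|.
    """
    same_row = q[i] == q[j]
    same_diagonal = abs(q[i] - q[j]) == abs(i - j)
    return same_row or same_diagonal

# Q1=2, Q2=4, Q3=1, Q4=3 solves 4-queens: no pair of queens attacks.
solution = {1: 2, 2: 4, 3: 1, 4: 3}
print(all(not violates(solution, i, j)
          for i in range(1, 5) for j in range(i + 1, 5)))  # True
```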

Solving a CSP problem
Search strategy: ?
(The search proceeds over the same tree of partial assignments shown above.)

Solving a CSP problem
Search strategy: DFS. Constraints may be violated early, before all variables are assigned.

Solving a CSP problem
Search strategy: DFS. Constraints may be violated early: constraint propagation infers valid and invalid assignments.

Constraint propagation
Three known techniques for propagating the effects of past assignments and constraints:
- value propagation,
- arc consistency,
- forward checking.
They differ in the completeness of their inferences and in the time complexity of those inferences.
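Of the three techniques, forward checking is the simplest to sketch: after each assignment, prune inconsistent values from the domains of the unassigned variables, and backtrack as soon as some domain becomes empty. A hypothetical 4-queens version (names and representation are mine, not from the slides):

```python
def forward_check(domains, col, row):
    """Prune the domains of the other columns after placing queen `col`
    in `row`; an emptied domain signals a dead end for DFS."""
    pruned = {c: set(d) for c, d in domains.items()}
    for c in pruned:
        if c != col:
            pruned[c].discard(row)                  # same row
            pruned[c].discard(row + abs(c - col))   # one diagonal
            pruned[c].discard(row - abs(c - col))   # other diagonal
    pruned[col] = {row}
    return pruned

domains = {c: {1, 2, 3, 4} for c in range(1, 5)}
print(forward_check(domains, 1, 1)[2])  # {3, 4}
```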

Solving a CSP problem
Search strategy: DFS with constraint propagation, which infers valid and invalid assignments. Which candidate should be expanded first in the DFS?

Heuristics for CSPs (example: map coloring)
- Most constrained variable: country E is the most constrained one (it cannot use Red or Green).
- Least constraining value: assume we have chosen variable C; Red is the valid color that least constrains future choices.
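The most-constrained-variable heuristic is just an argmin over remaining domain sizes. A sketch (the domains below are illustrative, loosely matching the map-coloring slide where E has only one color left):

```python
def most_constrained_variable(domains, assigned):
    """Pick the unassigned variable with the fewest values left."""
    return min((v for v in domains if v not in assigned),
               key=lambda v: len(domains[v]))

domains = {'A': {'Red', 'Green', 'Blue'},
           'C': {'Red', 'Blue'},
           'E': {'Blue'}}
print(most_constrained_variable(domains, set()))  # E
```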

Search for the optimal configuration
Objective: find the optimal configuration. Optimality is defined by some quality measure, and the quality measure reflects our preference for each configuration (or state).

Example: Traveling salesman problem
Problem: a graph of cities A, B, C, D, E, F with distances on the edges.
Goal: find the shortest tour that visits every city once and returns to the start.
An example of a valid tour: ABCDEF.
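The quality measure here is the tour length. A small sketch with made-up distances (the slide does not give the actual edge weights):

```python
# illustrative symmetric distances between the six cities
dist = {('A', 'B'): 2, ('B', 'C'): 3, ('C', 'D'): 1,
        ('D', 'E'): 4, ('E', 'F'): 2, ('F', 'A'): 3}
dist.update({(b, a): d for (a, b), d in dist.items()})

def tour_length(tour):
    """Sum the edge lengths around the closed tour (back to the start)."""
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

print(tour_length("ABCDEF"))  # 15
```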

Example: N-queens
A CSP problem can be converted to an optimal-configuration problem:
- the quality of a configuration in the CSP = the number of constraints violated,
- solving = minimizing the number of constraint violations.
Example configurations: # of violations = 3, # of violations = 1, # of violations = 0 (a solution).

Iterative optimization methods
Searching systematically for the best configuration with DFS may not be the best solution: the worst-case running time is exponential in the number of variables. Solutions to large optimal-configuration problems are often found using iterative optimization methods:
- hill climbing,
- simulated annealing,
- genetic algorithms.
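Counting violations for n-queens, the quality measure just described, can be sketched as follows (the representation is mine: rows[i] is the row of the queen in column i):

```python
def num_violations(rows):
    """Count attacking pairs of queens; 0 means the CSP is solved."""
    n = len(rows)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if rows[i] == rows[j]                  # same row
               or abs(rows[i] - rows[j]) == j - i)    # same diagonal

print(num_violations([2, 4, 1, 3]))  # 0 -- a solution
print(num_violations([1, 2, 3, 4]))  # 6 -- all queens on one diagonal
```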

Iterative optimization methods
Properties:
- they search the space of complete configurations,
- they take advantage of local moves: operators make local changes to complete configurations,
- they keep track of just one state (the current state), with no memory of past states and no search tree.

Example: N-queens
Local operator for generating the next state: select a variable (a queen) and reallocate its position.

Example: Traveling salesman problem
Local operator for generating the next state: divide the existing tour into two parts and reconnect the two parts in the opposite order.
Example: ABCDEF, split into part 1 = ABCD and part 2 = EF, reconnects to ABCDFE.
The local operator thus generates the next configuration (state) from the current one.
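The tour-splitting operator amounts to reversing a suffix of the tour. A sketch (the function name `reconnect` is mine):

```python
def reconnect(tour, cut):
    """Split the tour at `cut` and reattach the second part reversed,
    e.g. ABCDEF with cut=4 -> ABCD + FE."""
    return tour[:cut] + tour[cut:][::-1]

print(reconnect("ABCDEF", 4))  # ABCDFE
```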

Searching the configuration space
Search algorithms keep only one configuration (the current configuration). Problem: how do we decide which operator to apply?

Search algorithms
Two strategies for choosing the configuration (state) to be visited next:
- hill climbing,
- simulated annealing.
Later: extensions to multiple current states, i.e., genetic algorithms.
Note: maximization is the inverse of minimization: min f(X) = -max[-f(X)].

Hill climbing
Look around at the states in the local neighborhood and choose the one with the best value. Assume we want to maximize the value.

Hill climbing
- Always choose the next best successor state.
- Stop when no improvement is possible.
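The two rules above give a very short loop. A sketch over a toy one-dimensional landscape (the landscape and the neighbor function are illustrative, not from the slides):

```python
def hill_climb(state, neighbors, value):
    """Greedy ascent: move to the best neighbor until none improves."""
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state          # no improvement possible: stop
        state = best

# toy landscape f(x) = -(x - 3)^2, maximized at x = 3
f = lambda x: -(x - 3) ** 2
print(hill_climb(0, lambda x: [x - 1, x + 1], f))  # 3
```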

Hill climbing
Look around at the states in the local neighborhood and choose the one with the best value. What can go wrong?

Hill climbing
Hill climbing can get trapped in a local optimum: a state from which no local move improves the value, even though better states exist elsewhere in the space.

Hill climbing
Hill climbing can also get clueless on plateaus, where all neighboring states have the same value.

Hill climbing and n-queens
The quality of a configuration is given by the number of constraints violated, so hill climbing reduces the number of violated constraints.
Min-conflict strategy (heuristic):
- randomly choose a variable that is involved in conflicts,
- choose its value so that it violates the fewest constraints.
Success!! But not always: the local-optima problem remains.
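One step of the min-conflict strategy can be sketched as follows (helper names are mine; the step is repeated until no conflicts remain, accepting that it may stall in a local optimum):

```python
import random

def conflicts(rows, col, row):
    """How many queens would attack queen `col` if it sat in `row`?"""
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts_step(rows):
    """Move one randomly chosen conflicted queen to its best row."""
    conflicted = [c for c, r in enumerate(rows) if conflicts(rows, c, r) > 0]
    if not conflicted:
        return rows                      # already a solution
    col = random.choice(conflicted)
    rows[col] = min(range(1, len(rows) + 1),
                    key=lambda row: conflicts(rows, col, row))
    return rows

print(min_conflicts_step([2, 4, 1, 3]))  # [2, 4, 1, 3] -- already solved
```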

Simulated annealing
Permits bad moves to states with a lower value and can thus escape local optima. It gradually decreases the frequency of such moves and their size; the parameter controlling this is called the temperature.

Simulated annealing algorithm
The probability of making a move:
- the probability of moving to a state with a higher value is 1,
- the probability of moving to a state with a lower value is p(Accept NEXT) = e^(ΔE/T), where ΔE = E_NEXT - E_CURRENT.
The acceptance probability thus depends on the energy difference ΔE.
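The move rule can be written directly (for maximization, as on the slides; the function name is mine):

```python
import math
import random

def accept(e_next, e_current, T):
    """Always accept uphill moves; accept downhill ones with prob e^(dE/T)."""
    dE = e_next - e_current
    if dE > 0:
        return True                         # higher value: probability 1
    return random.random() < math.exp(dE / T)

print(accept(180, 167, 10.0))  # True: uphill move, dE = 13 > 0
```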

Simulated annealing algorithm
Current configuration: energy E_CURRENT = 167. Candidate next configurations with energies E = 145, E = 180, E = 191.

Simulated annealing algorithm
Move to the candidate with E_NEXT = 145:
ΔE = E_NEXT - E_CURRENT = 145 - 167 = -22 < 0,
p(Accept) = e^(ΔE/T) = e^(-22/T): sometimes accept!

Simulated annealing algorithm
Move to the candidate with E_NEXT = 180:
ΔE = E_NEXT - E_CURRENT = 180 - 167 = 13 > 0,
p(Accept) = 1: always accept!

Simulated annealing algorithm
The probability of moving to a state with a lower value is p(Accept) = e^(ΔE/T), where ΔE = E_NEXT - E_CURRENT. The probability is modulated through a temperature parameter T:
- for T → ∞, the probability of any move approaches 1,
- for T → 0, the probability of moving to a state with a smaller value goes down and approaches 0.
Cooling schedule: the schedule of changes of the parameter T over the iteration steps.
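Putting the acceptance rule and a cooling schedule together, one common sketch uses geometric cooling, T <- alpha * T at each step (all parameter values and the toy landscape below are illustrative, not from the slides):

```python
import math
import random

def simulated_annealing(state, neighbor, value, T=10.0, alpha=0.95, steps=1000):
    """Maximize `value` with geometric cooling; track the best state seen."""
    best = state
    for _ in range(steps):
        nxt = neighbor(state)
        dE = value(nxt) - value(state)
        if dE > 0 or random.random() < math.exp(dE / max(T, 1e-12)):
            state = nxt
        if value(state) > value(best):
            best = state
        T *= alpha                 # cooling schedule: T decays each step
    return best

random.seed(0)
f = lambda x: -(x - 7) ** 2        # toy landscape, maximum at x = 7
result = simulated_annealing(0, lambda x: x + random.choice([-1, 1]), f)
```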

Simulated annealing
The simulated annealing algorithm was developed originally for modeling physical processes (Metropolis et al., 1953).
Properties: if T is decreased slowly enough, the best configuration (state) is always reached.
Applications: VLSI design, airline scheduling.

Simulated evolution and genetic algorithms
Limitations of simulated annealing:
- it pursues only one state configuration at a time,
- changes to configurations are typically local.
Can we do better? Assume we have two configurations with good values that are quite different. We may expect that a combination of the two individual configurations could lead to a configuration with an even higher value (this is not guaranteed!). This is the idea behind genetic algorithms, in which we grow a population of individual configurations.
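The combination idea is typically implemented as crossover. A one-point crossover sketch on n-queens-style configurations (a standard GA operator; the example values are illustrative):

```python
def crossover(parent1, parent2, point):
    """Combine two configurations: take parent1's prefix up to `point`
    and parent2's suffix, so the child inherits traits of both."""
    return parent1[:point] + parent2[point:]

print(crossover([2, 4, 1, 3], [3, 1, 4, 2], 2))  # [2, 4, 4, 2]
```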