Summary. AIMA sections 4.3, 4.4. Hill-climbing; simulated annealing; genetic algorithms (briefly); local search in continuous spaces (very briefly).

Iterative improvement
In many optimization problems the path is irrelevant; the goal state itself is the solution.
The state space is then a set of complete configurations, and the task is to find an optimal configuration (e.g., TSP) or a configuration satisfying constraints (e.g., n-queens).
In such cases we can use iterative improvement: keep a single current state and try to improve it.
Constant space; suitable for online as well as offline search.

Example: Travelling Salesperson Problem
Start with any complete tour, then repeatedly perform pairwise exchanges.
Variants of this approach get within 1% of optimal very quickly with thousands of cities.
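
A minimal Python sketch of the pairwise-exchange idea (the classic 2-opt move: reverse a segment of the tour and keep the result if it is shorter). The random city coordinates and the tour_length helper are illustrative assumptions, not part of the slides:

    import random, math

    def tour_length(tour, pts):
        # Total length of the closed tour over 2-D points.
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def two_opt(pts, iterations=10000):
        # Start with an arbitrary complete tour, then repeatedly try
        # pairwise edge exchanges, keeping any exchange that shortens it.
        tour = list(range(len(pts)))
        best = tour_length(tour, pts)
        for _ in range(iterations):
            i, j = sorted(random.sample(range(len(pts)), 2))
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            length = tour_length(candidate, pts)
            if length < best:
                tour, best = candidate, length
        return tour, best

    cities = [(random.random(), random.random()) for _ in range(100)]
    print(two_opt(cities)[1])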

Example: n-queens
Put n queens on an n x n board with no two queens on the same row, column, or diagonal.
Move a queen to reduce the number of conflicts.
Almost always solves n-queens problems almost instantaneously for very large n, e.g., n = 1 million.

Hill-climbing (or gradient ascent/descent)
Like climbing Everest in thick fog with amnesia.

    function Hill-Climbing(problem) returns a state that is a local maximum
        inputs: problem, a problem
        local variables: current, a node
                         neighbor, a node
        current <- Make-Node(Initial-State[problem])
        loop do
            neighbor <- a highest-valued successor of current
            if Value[neighbor] <= Value[current] then return State[current]
            current <- neighbor
        end
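
A runnable Python sketch of this loop for n-queens, where the value of a state is the negated number of attacking pairs (so the "highest-valued" successor is the one with the fewest conflicts). The state encoding, one queen per column with state[c] giving its row, is an assumption for illustration:

    import random

    def conflicts(state):
        # Number of pairs of queens attacking each other
        # (same row or same diagonal; columns are distinct by encoding).
        n = len(state)
        return sum(1 for a in range(n) for b in range(a + 1, n)
                   if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

    def hill_climb(n):
        current = [random.randrange(n) for _ in range(n)]
        while True:
            # Successors: move one queen to another row within its column.
            neighbors = [current[:c] + [r] + current[c + 1:]
                         for c in range(n) for r in range(n) if r != current[c]]
            best = min(neighbors, key=conflicts)
            if conflicts(best) >= conflicts(current):
                return current      # local optimum (possibly not a solution)
            current = best

    solution = hill_climb(8)
    print(solution, conflicts(solution))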

Hill-climbing contd.
Useful to consider the state-space landscape.
Random-restart hill climbing overcomes local maxima (and is trivially complete).
Random sideways moves escape from shoulders, but loop forever on flat maxima.

Simulated Annealing
Inspired by statistical mechanics.
Idea: escape local maxima by allowing some bad moves, but gradually decrease their frequency.
Allow more random moves at the beginning, so we can reach zones with better solutions.
Diminish the probability of a random move towards the end, to refine the search around a good solution.

Simulated annealing (pseudo-code)

    function Simulated-Annealing(problem, schedule) returns a solution state
        inputs: problem, a problem
                schedule, a mapping from time to "temperature"
        local variables: current, a node
                         next, a node
                         T, a "temperature" controlling prob. of downward steps
        current <- Make-Node(Initial-State[problem])
        for t <- 1 to infinity do
            T <- schedule[t]
            if T = 0 then return current
            next <- a randomly selected successor of current
            dE <- Value[next] - Value[current]
            if dE > 0 then current <- next
            else current <- next only with probability e^(dE/T)

Properties of simulated annealing
At fixed temperature T, the state occupation probability reaches the Boltzmann distribution
    p(x) = a * e^(E(x)/kT)
If T is decreased slowly enough, the search always reaches the best state x*, because
    e^(E(x*)/kT) / e^(E(x)/kT) = e^((E(x*) - E(x))/kT) >> 1
for small T. Is this necessarily an interesting guarantee?
Devised by Metropolis et al., 1953, for physical process modelling.
Widely used in VLSI layout, airline scheduling, etc.
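
A minimal Python sketch of this loop. The geometric cooling schedule and the toy one-dimensional objective are illustrative assumptions; any schedule that reaches T = 0 slowly enough fits the pseudo-code:

    import math, random

    def simulated_annealing(initial, value, successor,
                            t0=1.0, cooling=0.999, t_min=1e-4):
        # Accept uphill moves always, downhill moves with probability
        # e^(dE/T); T decays geometrically, standing in for schedule[t].
        current, T = initial, t0
        while T > t_min:        # plays the role of "if T = 0 then return"
            nxt = successor(current)
            dE = value(nxt) - value(current)
            if dE > 0 or random.random() < math.exp(dE / T):
                current = nxt
            T *= cooling
        return current

    # Toy instance: maximize value(x) = -(x - 3)^2 over the integers,
    # with successors one step left or right.
    best = simulated_annealing(
        initial=50,
        value=lambda x: -(x - 3) ** 2,
        successor=lambda x: x + random.choice((-1, 1)),
    )
    print(best)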

Local beam search
Idea: keep k states instead of 1; choose the top k of all their successors.
Not the same as k searches run in parallel! Searches that find good states recruit other searches to join them.
Problem: quite often, all k states end up on the same local hill.
Idea: choose k successors randomly, biased towards good ones.
Observe the close analogy to natural selection!
Genetic algorithms = stochastic local beam search + generate successors from pairs of states.
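
A short Python sketch of the deterministic variant (pool the successors of all k states, keep the best k of the pool); the value and successors functions are assumed to be supplied by the caller:

    def local_beam_search(starts, value, successors, steps=100):
        # Pooling the successors lets states that find good regions
        # "recruit" the whole beam, unlike k independent searches.
        beam = list(starts)
        k = len(beam)
        for _ in range(steps):
            pool = [s for state in beam for s in successors(state)]
            if not pool:
                break
            beam = sorted(pool, key=value, reverse=True)[:k]
        return max(beam, key=value)

Plugging in the n-queens successors and negated conflict count from the hill-climbing sketch above gives a working k-state search.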

Genetic algorithms contd.
GAs require states encoded as strings (GPs use programs).
Crossover helps iff substrings are meaningful components.
GAs are not evolution: e.g., real genes also encode the replication machinery!

Continuous state spaces
Suppose we want to site three airports in Romania:
6-D state space defined by (x1, y1), (x2, y2), (x3, y3);
objective function f(x1, y1, x2, y2, x3, y3) = sum of squared distances from each city to its nearest airport.
Discretization methods turn continuous space into discrete space, e.g., the empirical gradient considers a +/- delta change in each coordinate.
Gradient methods compute
    grad f = (df/dx1, df/dy1, df/dx2, df/dy2, df/dx3, df/dy3)
to increase or reduce f, e.g., by x <- x + a * grad f(x).
Sometimes we can solve grad f(x) = 0 exactly (e.g., with one city).
Newton-Raphson (1664, 1690) iterates x <- x - H^-1(x) * grad f(x) to solve grad f(x) = 0, where H_ij = d^2 f / dx_i dx_j.
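
A small Python sketch of one gradient step for the airport-siting objective: each city contributes gradient 2*(airport - city) to its nearest airport, and we move against the gradient to reduce f. The random city coordinates and the fixed step size alpha are illustrative assumptions:

    import random

    cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

    def grad_step(airports, alpha=0.01):
        # f = sum over cities of squared distance to the nearest airport.
        grads = [[0.0, 0.0] for _ in airports]
        for cx, cy in cities:
            i = min(range(len(airports)),
                    key=lambda j: (airports[j][0] - cx) ** 2
                                + (airports[j][1] - cy) ** 2)
            grads[i][0] += 2 * (airports[i][0] - cx)
            grads[i][1] += 2 * (airports[i][1] - cy)
        # Gradient descent step: x <- x - alpha * grad f(x).
        return [(x - alpha * gx, y - alpha * gy)
                for (x, y), (gx, gy) in zip(airports, grads)]

    airports = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(3)]
    for _ in range(200):
        airports = grad_step(airports)
    print(airports)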

Exercise: Local Search for the 4-Queens problem
Consider the 4-Queens problem. Assume the evaluation function is the number of pairs of queens that attack each other, and that the initial state is (1234).
- What is the score of the initial state?
- Write down the values of all successor states for this initial state.
- Implement a simple program that computes the next best state(s) for your hill-climbing approach.
- Trace a possible execution of a (deterministic) hill-climbing approach.
- Comment on the optimality of the final state.
sol: eserc1localsearch.m (http://profs.sci.univr.it/~farinelli/courses/ia/code/)

Exercise: Local Beam Search for the 4-Queens problem
Consider the 4-Queens problem and the deterministic hill-climbing approach described above. Assume k = 3 and that the initial states are (1234), (2222), (3333).
- Trace the execution of a parallel search. sol: eserc2parallel.m (http://profs.sci.univr.it/~farinelli/courses/ia/code/)
- Trace the execution of a beam search. sol: eserc2beamsearch.m (http://profs.sci.univr.it/~farinelli/courses/ia/code/)
- Consider these initial states: (3333), (1234), (2222); trace the beam search.
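
For reference, a minimal Python sketch of the evaluation and successor enumeration the first exercise asks for; the official solutions are the linked .m files, and the state encoding here (a tuple of rows, one queen per column, so (1234) becomes (1, 2, 3, 4)) is an assumption:

    from itertools import combinations

    def attacks(state):
        # Pairs of queens attacking each other; state[c] is the row of the
        # queen in column c, so only rows and diagonals can clash.
        return sum(1 for a, b in combinations(range(len(state)), 2)
                   if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

    def successors(state):
        # All states reachable by moving one queen within its column.
        return [state[:c] + (r,) + state[c + 1:]
                for c in range(len(state))
                for r in range(1, len(state) + 1) if r != state[c]]

    initial = (1, 2, 3, 4)
    print(attacks(initial))                     # score of the initial state
    scored = sorted(successors(initial), key=attacks)
    best = [s for s in scored if attacks(s) == attacks(scored[0])]
    print(best)                                 # next best state(s)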