
Chapter 4: Beyond Classical Search
4.1 Local search algorithms and optimization problems
CS4811 - Artificial Intelligence
Nilufer Onder, Department of Computer Science, Michigan Technological University

Outline: hill-climbing search, simulated annealing, local beam search, genetic algorithms.

Iterative improvement algorithms. In the problems we studied so far, the solution is the path: for example, the solution to the 8-puzzle is a series of movements for the blank tile, and the solution to the traveling-in-Romania problem is a sequence of cities leading to Bucharest. In many optimization problems, however, the path is irrelevant; the goal state itself is the solution. The state space is then a set of complete configurations, and the optimal configuration is one of them. An iterative improvement algorithm keeps a single current state and tries to improve it, so its space complexity is constant.

Example: Travelling Salesperson Problem. Start with any complete tour and repeatedly perform pairwise exchanges of edges (2-opt moves). [Figure: a five-city tour over cities A-E, before and after one pairwise exchange.] Variants of this approach get within 1% of optimal very quickly, even with thousands of cities.
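As a concrete illustration (not part of the original slides), here is a minimal Python sketch of the pairwise-exchange idea: a 2-opt style pass that reverses a segment of the tour whenever doing so shortens it. The tour representation, the example coordinates, and the dist helper are assumptions made for this sketch.

import math

def tour_length(tour, dist):
    """Total length of a closed tour; dist(a, b) gives the distance between two cities."""
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly apply improving pairwise edge exchanges until no exchange helps."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 2, len(tour) + 1):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]   # reverse one segment
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

# Example usage with five cities on a plane (coordinates are made up).
cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (8, 0)}
dist = lambda a, b: math.dist(cities[a], cities[b])
print(two_opt(["A", "B", "C", "D", "E"], dist))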

Example: n-queens. Put n queens on an n × n board with no two queens on the same row, column, or diagonal. Move a queen to reduce the number of conflicts (h). This almost always solves n-queens problems almost instantaneously for very large n, e.g., n = 1 million.

Example: n-queens (cont'd). In the figure on this slide, panel (a) shows the value of h for each possible successor obtained by moving a queen within its column, with the best moves marked; panel (b) shows a local minimum: the state has h = 1, but every successor has a higher cost.
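A small Python sketch of the heuristic used here, assuming the usual complete-state encoding in which state[c] is the row of the queen in column c; the function names are invented for this example.

def conflicts(state):
    """Number of pairs of queens attacking each other (the heuristic h)."""
    h, n = 0, len(state)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = state[c1] == state[c2]
            same_diagonal = abs(state[c1] - state[c2]) == c2 - c1
            if same_row or same_diagonal:
                h += 1
    return h

def best_column_move(state):
    """Return the successor with the lowest h obtained by moving one queen within its column."""
    best = list(state)
    for col in range(len(state)):
        for row in range(len(state)):
            candidate = list(state)
            candidate[col] = row
            if conflicts(candidate) < conflicts(best):
                best = candidate
    return best

print(conflicts([4, 5, 6, 3, 4, 5, 6, 5]))  # attacking pairs for one 8-queens state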

Hill-climbing (or gradient ascent/descent)
function Hill-Climbing(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node
  current ← Make-Node(problem.Initial-State)
  loop do
    neighbor ← a highest-valued successor of current
    if neighbor.Value ≤ current.Value then return current.State
    current ← neighbor
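A rough Python rendering of the same loop, assuming the caller supplies a successors function (returning neighboring states) and a value function to maximize; these names are placeholders, not part of the slides.

def hill_climbing(initial_state, successors, value):
    """Greedy local search: move to the best neighbor until no neighbor is better."""
    current = initial_state
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        neighbor = max(neighbors, key=value)
        if value(neighbor) <= value(current):
            return current
        current = neighbor

For n-queens, for instance, value could be the negated conflict count sketched earlier, so that higher values mean fewer attacking pairs.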

Hill-climbing (cont'd). Like climbing Everest in thick fog with amnesia. Problem: depending on the initial state, hill climbing can get stuck on a local maximum. In continuous spaces there are additional problems with choosing the step size and with slow convergence. [Figure: a one-dimensional state-space landscape with the objective function on the vertical axis, showing the global maximum, a shoulder, a local maximum, a "flat" local maximum, and the current state.]

Difficulties with ridges The ridge creates a sequence of local maxima that are not directly connected to each other. From each local maximum, all the available actions point downhill.

Hill-climbing techniques (a sketch of the random-restart variant follows):
stochastic hill climbing: choose randomly from among the uphill moves.
first-choice hill climbing: generate successors randomly one by one until one better than the current state is found.
random-restart hill climbing: restart from a randomly generated initial state.
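A minimal sketch of the random-restart idea: climb stands for any climbing routine (such as the hill_climbing sketch earlier), and random_state for a generator of random initial states. Both names are assumptions for this example.

def random_restart(climb, random_state, value, restarts=25):
    """Run a climbing routine from several random initial states and keep the best result."""
    best = None
    for _ in range(restarts):
        result = climb(random_state())
        if best is None or value(result) > value(best):
            best = result
    return best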

Simulated annealing
function Simulated-Annealing(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to "temperature"
  local variables: current, a node
                   next, a node
  current ← Make-Node(problem.Initial-State)
  for t = 1 to ∞ do
    T ← schedule(t)
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← next.Value − current.Value
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
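A compact Python sketch of the same procedure, assuming successors(current) returns a non-empty list of neighboring states and value is to be maximized; the cooling schedule shown is only an illustrative choice.

import math
import random

def simulated_annealing(initial_state, successors, value, schedule):
    """Accept every uphill move; accept a downhill move with probability e^(dE/T)."""
    current = initial_state
    t = 1
    while True:
        temperature = schedule(t)
        if temperature <= 0:
            return current
        candidate = random.choice(successors(current))
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
        t += 1

# An illustrative exponential cooling schedule that eventually reaches zero.
schedule = lambda t: 0 if t > 10_000 else 100 * (0.99 ** t)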

Simulated annealing (cont'd). Idea: escape local maxima by allowing some bad moves, but gradually decrease their size and frequency. Devised by Metropolis et al., 1953, for modelling physical processes. At a fixed temperature T, the state occupation probability reaches the Boltzmann distribution p(x) = α e^(E(x)/kT). When T is decreased slowly enough, the process always reaches the best state x*, because e^(E(x*)/kT) / e^(E(x)/kT) = e^((E(x*) − E(x))/kT) ≫ 1 for small T. (Is this necessarily an interesting guarantee?) Widely used in VLSI layout, airline scheduling, etc.

Local beam search. Idea: keep k states instead of 1; at each step, choose the top k of all their successors. This is not the same as k searches run in parallel: searches that find good states recruit other searches to join them. Problem: quite often, all k states end up on the same local hill. To improve, choose the k successors randomly, biased towards good ones (stochastic beam search). Observe the close analogy to natural selection!
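A minimal Python sketch of (deterministic) local beam search, assuming a non-empty list of initial states, successors and value helpers as in the earlier sketches, and a fixed number of steps; the stochastic variant would instead sample the next beam with probability proportional to value.

def local_beam_search(initial_states, successors, value, steps=100):
    """Keep the k best states; at each step, take the k best of all their successors."""
    beam = list(initial_states)
    k = len(beam)
    for _ in range(steps):
        candidates = [s for state in beam for s in successors(state)]
        if not candidates:
            break
        beam = sorted(candidates, key=value, reverse=True)[:k]
    return max(beam, key=value)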

The genetic algorithm
function Genetic-Algorithm(population, Fitness-Fn) returns an individual
  inputs: population, a set of individuals
          Fitness-Fn, a function that measures the fitness of an individual
  repeat
    new-population ← empty set
    for i = 1 to Size(population) do
      x ← Random-Selection(population, Fitness-Fn)
      y ← Random-Selection(population, Fitness-Fn)
      child ← Reproduce(x, y)
      if (small random probability) then child ← Mutate(child)
      add child to new-population
    population ← new-population
  until some individual is fit enough, or enough time has elapsed
  return the best individual in population, according to Fitness-Fn

The crossover function
function Reproduce(x, y) returns an individual
  inputs: x, y, parent individuals
  n ← Length(x)
  c ← random number from 1 to n
  return Append(Substring(x, 1, c), Substring(y, c + 1, n))
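A compact Python sketch that mirrors the two pseudocode functions above, assuming individuals are sequences (for example, strings of digits), fitness values are non-negative and not all zero, and the caller supplies a mutate function; all names are placeholders for this example.

import random

def reproduce(x, y):
    """Single-point crossover: a prefix of x followed by the matching suffix of y."""
    c = random.randint(1, len(x) - 1)
    return x[:c] + y[c:]

def genetic_algorithm(population, fitness_fn, mutate, generations=1000, mutation_rate=0.05):
    """Breed each new generation by fitness-weighted selection, crossover, and occasional mutation."""
    for _ in range(generations):
        weights = [fitness_fn(ind) for ind in population]
        new_population = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)
            child = reproduce(x, y)
            if random.random() < mutation_rate:
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness_fn)

For the 8-queens example below, an individual can be a string of eight digits giving the row of the queen in each column, with fitness equal to the number of non-attacking pairs (at most 28).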

Genetic algorithms (GAs). Idea: stochastic local beam search + generate successors from pairs of states. GAs require states encoded as strings. Crossover helps iff substrings are meaningful components. GAs are not the same as biological evolution: e.g., real genes also encode the replication machinery itself.

Genetic algorithm example

The genetic algorithm with the 8-queens problem

Summary. Hill climbing is a steady, monotonic ascent to better nodes. Simulated annealing, local beam search, and genetic algorithms are randomized searches with a bias towards better nodes. All need very little space: a single current state for hill climbing and simulated annealing, or a small population of states for local beam search and genetic algorithms. None of them is guaranteed to find the globally optimal solution.

Sources for the slides: AIMA textbook (3rd edition); AIMA slides (http://aima.cs.berkeley.edu/)