Local Search
D. Poole and A. Mackworth, Artificial Intelligence (2010), Lecture 4.2

Local Search (Greedy Descent): Maintain an assignment of a value to each variable. Repeat:
- Select a variable to change.
- Select a new value for that variable.
Until a satisfying assignment is found.

Local Search for CSPs

Aim: find an assignment with zero unsatisfied constraints.
Given an assignment of a value to each variable, a conflict is an unsatisfied constraint.
The goal is an assignment with zero conflicts.
Heuristic function to be minimized: the number of conflicts.

Greedy Descent Variants

To choose a variable to change and a new value for it:
- Find a variable-value pair that minimizes the number of conflicts.
- Select a variable that participates in the most conflicts. Select a value that minimizes the number of conflicts.
- Select a variable that appears in any conflict. Select a value that minimizes the number of conflicts.
- Select a variable at random. Select a value that minimizes the number of conflicts.
- Select a variable and value at random; accept this change if it doesn't increase the number of conflicts.
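For concreteness, here is a minimal Python sketch of the third variant (pick any variable that appears in a conflict, then a value that minimizes the number of conflicts). The three-variable CSP, its domains, and its constraints are made up for illustration.

```python
import random

# A toy CSP: the variable names, domains, and constraints are illustrative only.
domains = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]}
constraints = [
    ("A", "B", lambda a, b: a != b),   # A and B must differ
    ("B", "C", lambda b, c: b < c),    # B must be less than C
]

def conflicts(assignment):
    """Return the constraints violated by the assignment (the conflicts)."""
    return [(x, y, ok) for (x, y, ok) in constraints
            if not ok(assignment[x], assignment[y])]

def greedy_descent(max_steps=1000):
    # Start from a random total assignment.
    assignment = {v: random.choice(d) for v, d in domains.items()}
    for _ in range(max_steps):
        confl = conflicts(assignment)
        if not confl:
            return assignment            # zero conflicts: a satisfying assignment
        # Select a variable that appears in some conflict.
        var = random.choice([v for c in confl for v in c[:2]])
        # Select a value for it that minimizes the number of conflicts.
        assignment[var] = min(domains[var],
                              key=lambda val: len(conflicts({**assignment, var: val})))
    return None                          # no solution found within the step budget

print(greedy_descent())
```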

Complex Domains

When the domains are small or unordered, the neighbors of an assignment can correspond to choosing another value for one of the variables.
When the domains are large and ordered, the neighbors of an assignment are the adjacent values for one of the variables.
If the domains are continuous, gradient descent changes each variable in proportion to the gradient of the heuristic function in that direction: the value of variable X_i goes from v_i to v_i − η ∂h/∂X_i, where η is the step size.
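As a sketch of the continuous case, the update v_i ← v_i − η ∂h/∂X_i can be applied whenever the gradient is available; the heuristic h, starting point, step size, and iteration count below are all illustrative assumptions.

```python
# Minimal gradient descent sketch on a made-up heuristic h(x, y) = (x - 1)^2 + (y + 2)^2.

def grad_h(x, y):
    # Gradient of h(x, y) = (x - 1)^2 + (y + 2)^2.
    return (2 * (x - 1), 2 * (y + 2))

eta = 0.1                       # step size
x, y = 5.0, 5.0                 # arbitrary starting assignment
for _ in range(100):
    gx, gy = grad_h(x, y)
    x, y = x - eta * gx, y - eta * gy   # each variable moves against its partial derivative

print(round(x, 3), round(y, 3))  # approaches the minimum at (1, -2)
```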

Problems with Greedy Descent

- A local minimum that is not a global minimum.
- A plateau where the heuristic values are uninformative.
- A ridge is a local minimum where n-step look-ahead might help.

[Figure: a ridge, a plateau, and a local minimum.]

Randomized Algorithms

Consider two methods to find a minimum value:
- Greedy descent: starting from some position, keep moving down and report the minimum value found.
- Pick values at random and report the minimum value found.
Which do you expect to work better to find a global minimum? Can a mix work better?

Randomized Greedy Descent

As well as downward steps we can allow for:
- Random steps: move to a random neighbor.
- Random restart: reassign random values to all variables.
Which is more expensive computationally?

1-Dimensional Ordered Examples

Two 1-dimensional search spaces; the search can step right or left.

[Figure: two 1-dimensional search spaces, (a) and (b).]

Which method would most easily find the global minimum? What happens in hundreds or thousands of dimensions? What if different parts of the search space have different structure?

Stochastic Local Search

Stochastic local search is a mix of:
- Greedy descent: move to a lowest neighbor.
- Random walk: take some random steps.
- Random restart: reassign values to all variables.
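A minimal sketch of such a mix, with made-up probabilities for walk and restart steps and a toy one-dimensional problem (none of which come from the slides):

```python
import random

def stochastic_local_search(random_state, neighbors, h,
                            p_walk=0.2, p_restart=0.01, max_steps=10000):
    """Mix greedy descent, random walk, and random restart; return the best state seen."""
    current = random_state()
    best = current
    for _ in range(max_steps):
        r = random.random()
        if r < p_restart:
            current = random_state()                      # random restart
        elif r < p_restart + p_walk:
            current = random.choice(neighbors(current))   # random walk step
        else:
            current = min(neighbors(current), key=h)      # greedy step: lowest neighbor
        if h(current) < h(best):
            best = current
    return best

# Toy 1-D ordered search space: minimize a bumpy function over the integers 0..99.
h = lambda x: (x - 37) ** 2 // 10 + (7 if x % 10 == 0 else 0)
neighbors = lambda x: [max(0, x - 1), min(99, x + 1)]
random_state = lambda: random.randrange(100)

print(stochastic_local_search(random_state, neighbors, h))
```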

Random Walk

Variants of random walk:
- When choosing the best variable-value pair, sometimes choose a random variable-value pair instead.
- When selecting a variable then a value:
  - Sometimes choose any variable that participates in the most conflicts.
  - Sometimes choose any variable that participates in any conflict (a red node).
  - Sometimes choose any variable.
  Sometimes choose the best value and sometimes choose a random value.

Comparing Stochastic Algorithms

How can you compare three algorithms when:
- one solves the problem 30% of the time very quickly but doesn't halt for the other 70% of the cases;
- one solves 60% of the cases reasonably quickly but doesn't solve the rest;
- one solves the problem in 100% of the cases, but slowly?
Summary statistics, such as mean run time, median run time, and mode run time, don't make much sense.

Runtime Distribution

Plots runtime (or number of steps) against the proportion (or number) of the runs that are solved within that runtime.

[Figure: runtime distribution, with the proportion of runs solved (0 to 1) plotted against runtime on a log scale (1 to 1000).]
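Such a plot can be produced from a list of per-run runtimes by sorting them and plotting the cumulative proportion of runs solved; this sketch assumes matplotlib and uses made-up runtimes.

```python
import matplotlib.pyplot as plt

# Made-up per-run step counts for one algorithm.
runtimes = sorted([12, 15, 20, 20, 33, 47, 60, 102, 250, 800])
proportion = [(i + 1) / len(runtimes) for i in range(len(runtimes))]

plt.step(runtimes, proportion, where="post")   # cumulative proportion solved
plt.xscale("log")                              # runtime axis is logarithmic, as on the slide
plt.xlabel("runtime (steps)")
plt.ylabel("proportion of runs solved")
plt.show()
```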

Variant: Simulated Annealing

Pick a variable at random and a new value at random.
- If it is an improvement, adopt it.
- If it isn't an improvement, adopt it probabilistically depending on a temperature parameter, T: with current assignment n and proposed assignment n′, we move to n′ with probability e^(−(h(n′) − h(n))/T).
Temperature can be reduced.

Probability of accepting a change:

  Temperature   1-worse    2-worse     3-worse
  10            0.91       0.81        0.74
  1             0.37       0.14        0.05
  0.25          0.02       0.0003      0.000006
  0.1           0.00005    2 x 10^-9   9 x 10^-14
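The acceptance rule and the table can be checked directly; this minimal sketch encodes the probability e^(−(h(n′) − h(n))/T) and recomputes the table entries for heuristic increases of 1, 2, and 3.

```python
import math
import random

def accept(delta, T):
    """Return True if a move that worsens the heuristic by delta is accepted."""
    return delta <= 0 or random.random() < math.exp(-delta / T)

# Reproduce the acceptance-probability table (rows: temperature; columns: 1/2/3-worse).
for T in (10, 1, 0.25, 0.1):
    probs = [math.exp(-delta / T) for delta in (1, 2, 3)]
    print(T, ["%.2g" % p for p in probs])
```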

Tabu Lists

To prevent cycling we can maintain a tabu list of the k last assignments.
- Don't allow an assignment that is already on the tabu list.
- If k = 1, we don't allow an assignment of the same value to the variable chosen.
- We can implement it more efficiently than as a list of complete assignments.
- It can be expensive if k is large.
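One illustrative way to keep the tabu list cheap is to record only the k most recent (variable, value) changes rather than complete assignments; the sketch below takes that approach (an assumption, not the only possible implementation).

```python
from collections import deque

class TabuList:
    """Forbid re-doing any of the k most recent (variable, value) changes."""

    def __init__(self, k):
        self.recent = deque(maxlen=k)   # oldest entries drop off automatically

    def allowed(self, var, value):
        return (var, value) not in self.recent

    def record(self, var, value):
        self.recent.append((var, value))

tabu = TabuList(k=2)
tabu.record("A", 1)
print(tabu.allowed("A", 1))   # False: this change is on the tabu list
print(tabu.allowed("A", 2))   # True
```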

Parallel Search

A total assignment is called an individual.
Idea: maintain a population of k individuals instead of one.
At every stage, update each individual in the population.
Whenever an individual is a solution, it can be reported.
Like k restarts, but uses k times the minimum number of steps.

Beam Search

Like parallel search, with k individuals, but choose the k best out of all of the neighbors.
- When k = 1, it is greedy descent.
- When k = ∞, it is breadth-first search.
The value of k lets us limit space and parallelism.
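A minimal sketch of beam search on a toy one-dimensional space, keeping the k best of all neighbors at each step (the problem and heuristic are made up):

```python
import random

h = lambda x: abs(x - 42)                       # heuristic to minimize; goal is 42
neighbors = lambda x: [x - 1, x + 1]

def beam_search(k, steps=200):
    population = [random.randrange(100) for _ in range(k)]
    for _ in range(steps):
        if any(h(x) == 0 for x in population):
            return [x for x in population if h(x) == 0][0]
        # Pool all neighbors of all individuals and keep the k best (no duplicates).
        pool = {n for x in population for n in neighbors(x)}
        population = sorted(pool, key=h)[:k]
    return min(population, key=h)

print(beam_search(k=3))
```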

Stochastic Beam Search

Like beam search, but it probabilistically chooses the k individuals at the next generation.
The probability that a neighbor is chosen is proportional to its heuristic value.
This maintains diversity amongst the individuals.
The heuristic value reflects the fitness of the individual.
Like asexual reproduction: each individual mutates and the fittest ones survive.

Genetic Algorithms

Like stochastic beam search, but pairs of individuals are combined to create the offspring.
For each generation:
- Randomly choose pairs of individuals, where the fittest individuals are more likely to be chosen.
- For each pair, perform a cross-over: form two offspring, each taking different parts of their parents.
- Mutate some values.
Stop when a solution is found.

Crossover

Given two individuals:
  X_1 = a_1, X_2 = a_2, ..., X_m = a_m
  X_1 = b_1, X_2 = b_2, ..., X_m = b_m
Select i at random. Form two offspring:
  X_1 = a_1, ..., X_i = a_i, X_i+1 = b_i+1, ..., X_m = b_m
  X_1 = b_1, ..., X_i = b_i, X_i+1 = a_i+1, ..., X_m = a_m
The effectiveness depends on the ordering of the variables. Many variations are possible.
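A minimal sketch of one-point crossover inside a toy genetic algorithm; the bit-string encoding, fitness function, mutation rate, and population size are illustrative assumptions, not part of the slides.

```python
import random

M = 8                                          # number of variables X_1..X_M

def crossover(a, b):
    """One-point crossover: the two offspring take different parts of their parents."""
    i = random.randrange(1, M)                 # crossover point, chosen at random
    return a[:i] + b[i:], b[:i] + a[i:]

def mutate(ind, rate=0.05):
    return [1 - v if random.random() < rate else v for v in ind]

def fitness(ind):
    return sum(ind)                            # toy fitness: number of ones

def next_generation(population):
    new_population = []
    while len(new_population) < len(population):
        # Fittest individuals are more likely to be chosen as parents.
        p1, p2 = random.choices(population,
                                weights=[fitness(i) + 1 for i in population], k=2)
        for child in crossover(p1, p2):
            new_population.append(mutate(child))
    return new_population[:len(population)]

population = [[random.randint(0, 1) for _ in range(M)] for _ in range(20)]
for generation in range(50):
    if any(fitness(i) == M for i in population):
        break                                  # stop when a solution is found
    population = next_generation(population)
print(max(population, key=fitness))
```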