Improving Repair-based Constraint Satisfaction Methods by Value Propagation


From: AAAI-94 Proceedings. Copyright 1994, AAAI (www.aaai.org). All rights reserved.

Improving Repair-based Constraint Satisfaction Methods by Value Propagation

Nobuhiro Yugami, Yuiko Ohta, Hirotaka Nara
FUJITSU LABORATORIES LTD.
1015 Kamikodanaka, Nakahara-ku, Kawasaki 211, Japan
yugami@flab.fujitsu.co.jp

Abstract

A constraint satisfaction problem (CSP) is the problem of finding an assignment that satisfies given constraints. An interesting approach to CSPs is the repair-based method, which first generates an initial assignment and then repairs it by minimizing the number of conflicts. Min-conflicts hill climbing (MCHC) and GSAT are typical examples of this approach. A serious problem with this approach is that it is sometimes trapped by local minima, which makes it difficult to use repair-based methods for problems with many local minima. We propose a new procedure, EFLOP, for escaping from local minima. EFLOP changes the values of mutually dependent variables by propagating changes through satisfied constraints. Repair-based methods can be greatly improved by combining them with EFLOP. We tested EFLOP on graph colorability problems, randomly generated binary CSPs and propositional satisfiability problems. EFLOP improved the performance of MCHC and GSAT in all experiments, and the improvement was greater for large and difficult problems.

Introduction

A constraint satisfaction problem (CSP) is the problem of finding an assignment that satisfies given constraints. Approaches to solving CSPs can be classified into constructive methods and repair-based methods [Minton et al. 92]. Constructive methods are based on tree search and find a solution by incrementally extending a consistent partial assignment. Constraint directed search [Fox 87], arc and path consistency algorithms [Mohr & Henderson 86] and intelligent backtracking methods such as dependency directed backtracking [Doyle 79] fall into this group. Repair-based methods are based on local search. They first generate an initial assignment with conflicts and then repair it by minimizing the number of conflicts. Min-conflicts hill climbing (MCHC) [Minton et al. 90] and GSAT [Selman, Levesque & Mitchell 92] are typical examples. For small- and medium-scale problems, constructive methods show good performance, but repair-based methods are more practical for large-scale problems.

Repair-based methods use local search techniques such as hill climbing to minimize the number of conflicts. Local search techniques do not provide the capability to escape from local minima; it is thus difficult to solve a CSP with many local minima by using a repair-based method. One solution is to use more powerful local search techniques such as simulated annealing (SA). Johnson et al. applied SA to graph colorability problems [Johnson et al. 91]. However, this has a major disadvantage: local search techniques capable of escaping from local minima, such as SA, require a very long time for problem solving. Another solution is to combine local search methods with a special procedure for escaping from local minima. [Selman & Kautz 93] and [Morris 93] proposed constraint weighting for this purpose.

We propose a new procedure, EFLOP (Escaping From Local Optima by Propagation), to resolve this problem. EFLOP changes the values of mutually dependent variables in order to escape from local minima. Satisfied constraints are used to find such a set of variables and their new values.
Repair-based methods call EFLOP when they are trapped by a local minimum and restart the search from the assignment that EFLOP outputs.

Repair-based methods and their limitations

Repair-based methods solve the problem of minimizing the number of conflicts. If the original CSP is solvable, the minimization problem has an optimal assignment with no conflicts, and that optimal assignment is a solution of the original CSP. Min-conflicts hill climbing (MCHC) [Minton et al. 90] solves this minimization problem by repairing an assignment with the following simple value selection heuristic.

Min-conflicts heuristic: Select a variable in conflict randomly. Assign it a value that minimizes the number of conflicts. Break ties randomly.

[Figure 1: A local minimum assignment and its division into consistent sub-problems (sub-problem 1 and sub-problem 2) of a graph 3-colorability problem. Only a constraint between the sub-problems is violated.]

This heuristic guarantees that the number of conflicts of the new assignment is less than or equal to that of the old assignment, because it keeps the present value if all other values increase the number of conflicts. The number of conflicts thus decreases monotonically. MCHC could solve certain classes of CSPs, e.g., n-queens problems and graph colorability problems for dense graphs, but could not solve other classes such as graph colorability problems for sparse graphs [Minton et al. 92]. This is because MCHC has little ability to escape from local minima of the minimization problem.

We explain this with an example of a graph colorability problem. Figure 1 shows a local minimum assignment for a 3-colorability problem used by Selman [Selman & Kautz 93] as an example that the basic GSAT could not escape from. Each node (variable) should be colored with one of three colors, but its color must differ from the colors of the neighboring nodes. In Figure 1, only one constraint, x≠y, is violated. The min-conflicts heuristic selects x (or y) and tries to assign a new value to x. However, x=green and x=blue each cause two conflicts, and x=red, the present value, minimizes the number of conflicts. The min-conflicts heuristic therefore keeps red for x, and MCHC cannot escape from this local minimum assignment.

We first discuss a property of local minimum assignments of CSPs. The example in Figure 1 can be divided into two sub-problems such that all constraints within each sub-problem are satisfied but a constraint between the sub-problems is violated. In general, if an assignment is a local minimum, the problem can be divided into consistent sub-problems in which conflicts occur only between different sub-problems. In such a case, changing the value of a conflicting variable causes new conflicts within the sub-problem that the variable belongs to. This increases the total number of conflicts and makes it impossible to escape from local minima with hill-climbing methods such as MCHC and GSAT. This property of local minima leads directly to the following procedure for escaping from them.

Step 1: Find a consistent sub-problem that involves a variable in conflict.
Step 2: Change the values of the variables in the sub-problem so that the new values satisfy all constraints in the sub-problem.

However, this is not practical because Step 2 requires solving the sub-problem, which takes a long time if the sub-problem is not small.
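
Before turning to EFLOP, the min-conflicts repair loop described above can be sketched as follows. This is our illustration, not code from the paper; the representation of a binary CSP as a dictionary mapping variable pairs to predicates is an assumption, and the names are ours.

```python
import random

def conflicts(assignment, var, value, constraints):
    """Count violated constraints involving var if var takes value.
    constraints: dict mapping a pair (u, v) to a predicate ok(a, b)."""
    count = 0
    for (u, v), ok in constraints.items():
        if u == var and not ok(value, assignment[v]):
            count += 1
        elif v == var and not ok(assignment[u], value):
            count += 1
    return count

def min_conflicts_hill_climbing(variables, domains, constraints, max_steps=10000):
    # Start from a random initial assignment, then repair it.
    assignment = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(assignment, v, assignment[v], constraints) > 0]
        if not conflicted:
            return assignment                     # all constraints satisfied
        var = random.choice(conflicted)           # pick a variable in conflict
        # Choose a value minimizing the number of conflicts; break ties randomly.
        best = min(domains[var],
                   key=lambda a: (conflicts(assignment, var, a, constraints),
                                  random.random()))
        assignment[var] = best                    # may keep the present value;
                                                  # this is where local minima arise
    return None                                   # no solution found within max_steps
```

On the graph of Figure 1, once only x≠y is violated this loop keeps re-selecting x or y and keeping their present values, which is exactly the trap discussed above.
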
EFLOP avoids this difficulty by combining Steps 1 and 2: it extends the sub-problem incrementally by propagating value changes through satisfied constraints. Figure 2 shows the EFLOP procedure.

EFLOP first selects a variable in conflict randomly and changes its value. If this change causes a satisfied constraint to become unsatisfied, EFLOP tries to resolve it by changing the value of another variable in that constraint. If this second change causes a new constraint violation, EFLOP tries to resolve it by changing the value of a third variable. This continues until no new conflict can be resolved by changing the values of variables that have not yet been changed. If more than one variable-value pair satisfies condition (c3) of Figure 2, EFLOP selects one of them with the following heuristic, so as not to propagate value changes to too many variables.

EFLOP heuristic: Select a pair of a variable and its value that minimizes the number of constraints which were satisfied before EFLOP was called but will be violated after the change. Break ties randomly.

procedure EFLOP
  Input: a local minimum assignment;
  Output: an assignment that is not a local minimum;
begin
  select a variable v in conflict randomly;
  change v's value randomly;
  V := {v};
  while possible,
    select a constraint c that satisfies the following conditions:
      (c1) c was satisfied before EFLOP was called;
      (c2) c is not satisfied now;
      (c3) there is a variable v in c and a value a of v such that
        (c3-1) v ∉ V;
        (c3-2) v's present value is consistent with the old values of the variables in V;
        (c3-3) v=a makes c satisfied;
        (c3-4) v=a is consistent with the new values of the variables in V;
    change v's value to a;
    add v to V;
  end-of-while;
end;

Figure 2: EFLOP procedure

The sub-problem defined by the set of changed variables V and the constraints between variables in V is consistent before EFLOP is called because of condition (c3-2), and it is also consistent after EFLOP terminates because of condition (c3-4).

We explain the EFLOP procedure in detail using the example in Figure 1. EFLOP first selects a variable in conflict and its new value (the new value must differ from the present value) randomly. Let x and green be selected. EFLOP changes the value of x to green and initializes the set of changed variables to V = {x}. This new value, green, violates two constraints, x≠z and x≠w. Both of these constraints satisfy conditions (c1)-(c3), so EFLOP selects one of them. Let x≠z be selected. EFLOP tries to satisfy it by changing the value of a variable in it, i.e., x or z, but EFLOP does not change x's value because condition (c3-1) requires that each variable be changed at most once. Two variable-value pairs, z=red and z=blue, satisfy condition (c3). EFLOP selects z=blue because z=red causes one new conflict while z=blue causes none. EFLOP changes z's value to blue and adds z to V. Next, EFLOP tries to satisfy x≠w. For the same reason as with x≠z, EFLOP changes w's value to blue. EFLOP then terminates because no constraint satisfies (c1)-(c3). EFLOP has assigned green to x, blue to z and blue to w; that is, it has changed the values of the variables in sub-problem 1 (Figure 1) to other values that are consistent with each other, and the new assignment generated by EFLOP satisfies all constraints.
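
The Figure 2 procedure can be rendered in code roughly as follows. This is our paraphrase for the same binary-CSP representation as the earlier sketch, not the authors' implementation; helper names such as consistent_with_changed are ours, and the EFLOP heuristic for choosing among candidate variable-value pairs is folded in.

```python
import random

def eflop(assignment, variables, domains, constraints):
    """One EFLOP call on a binary CSP (constraints: dict mapping a pair of
    variables to a predicate ok(a, b)). A rough sketch of Figure 2."""

    def sat(c, values):
        u, v = c
        return constraints[c](values[u], values[v])

    old = dict(assignment)                       # values before EFLOP was called
    satisfied_before = [c for c in constraints if sat(c, old)]   # candidates for (c1)
    changed = set()                              # the set V of Figure 2

    def consistent_with_changed(var, value, base):
        # Check every constraint linking var to an already-changed variable,
        # using the given base assignment (old values for (c3-2), new for (c3-4)).
        for (u, v) in constraints:
            if var in (u, v):
                other = v if var == u else u
                if other in changed:
                    trial = dict(base)
                    trial[var] = value
                    if not sat((u, v), trial):
                        return False
        return True

    # Pick a variable in conflict and give it a different value at random.
    conflicted = [v for v in variables
                  if any(v in c and not sat(c, assignment) for c in constraints)]
    if not conflicted:
        return assignment                        # nothing to repair
    v0 = random.choice(conflicted)
    assignment[v0] = random.choice([a for a in domains[v0] if a != old[v0]])
    changed.add(v0)

    while True:
        candidates = []                          # (variable, value) pairs meeting (c1)-(c3)
        for c in satisfied_before:               # (c1) satisfied before the call
            if sat(c, assignment):               # (c2) c must be violated now
                continue
            for w in c:
                if w in changed:                 # (c3-1) change each variable at most once
                    continue
                if not consistent_with_changed(w, old[w], old):        # (c3-2)
                    continue
                for a in domains[w]:
                    trial = dict(assignment)
                    trial[w] = a
                    if not sat(c, trial):        # (c3-3) w=a must satisfy c
                        continue
                    if not consistent_with_changed(w, a, assignment):  # (c3-4)
                        continue
                    candidates.append((w, a))
        if not candidates:
            return assignment                    # no further propagation possible
        # EFLOP heuristic: prefer the change that re-breaks the fewest constraints
        # that were satisfied before EFLOP was called; break ties randomly.
        def breakage(pair):
            w, a = pair
            trial = dict(assignment)
            trial[w] = a
            return sum(1 for c in satisfied_before if not sat(c, trial))
        w, a = min(candidates, key=lambda p: (breakage(p), random.random()))
        assignment[w] = a
        changed.add(w)
```
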
Experiments

In this section, we show the effect of EFLOP using graph colorability problems, randomly generated binary CSPs and propositional satisfiability problems. EFLOP is not a method for solving CSPs by itself but a method for improving the performance of repair-based methods. We combined EFLOP with MCHC for the colorability problems and the binary CSPs, and with GSAT for SAT, and compared the performance with and without EFLOP. MCHC (GSAT) with EFLOP calls EFLOP whenever it is trapped by a local minimum and restarts from the assignment output by EFLOP. MCHC (GSAT) without EFLOP restarts from a randomly generated initial assignment when it is trapped by a local minimum. We used the C language on a SPARCstation 2 for all experiments.

Graph Colorability Problems

We generated 3-colorability problems of sparse graphs in the way described in [Minton et al. 92]. We first divided N nodes into 3 groups of N/3 nodes and randomly created edges between nodes in different groups. If the generated graph had unconnected components, we rejected it. This generation guarantees the solvability of the generated problems. We used graphs with 2N edges because [Minton et al. 92] reported poor performance of MCHC on such problems.
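
As an illustration of this generation scheme, a small sketch (ours, with the connectivity check left out for brevity) could look like the following, returning instances in the variables/domains/constraints form used by the sketches above.

```python
import random

def generate_3col_instance(n, n_edges=None, seed=None):
    """Generate a solvable 3-colorability instance: split n nodes into 3 groups
    and add random edges only between nodes in different groups."""
    rng = random.Random(seed)
    if n_edges is None:
        n_edges = 2 * n                          # the paper uses graphs with 2N edges
    groups = [list(range(0, n // 3)),
              list(range(n // 3, 2 * n // 3)),
              list(range(2 * n // 3, n))]
    edges = set()
    while len(edges) < n_edges:
        g1, g2 = rng.sample(range(3), 2)         # two distinct groups
        u, v = rng.choice(groups[g1]), rng.choice(groups[g2])
        edges.add((min(u, v), max(u, v)))
    # The paper also rejects graphs with unconnected components; a retry loop
    # around this function could enforce that (omitted here).
    variables = list(range(n))
    domains = {v: ["red", "green", "blue"] for v in variables}
    constraints = {e: (lambda a, b: a != b) for e in edges}
    return variables, domains, constraints
```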

[Table 1: Average numbers of hill climbing steps for graph 3-colorability problems; columns: nodes, edges, MCHC, MCHC+EFLOP]
[Table 2: 50% solvable strength and average number of hill climbing steps for binary CSPs]
[Figure 3: Probability of solvability of random CSPs with 20 variables, 10 values per variable and 40 constraints, plotted against the strength of a constraint (0.5 to 0.7)]
[Figure 4: Average numbers of hill climbing steps for 20-variable CSPs, plotted against the strength of a constraint (0.5 to 0.7)]

Table 1 shows the average number of hill climbing steps of MCHC (averaged over 100 problems for each N), with and without EFLOP, for 3-colorability problems with 30-150 nodes. EFLOP improved MCHC drastically, and its effect was larger for large problems than for small ones.

Binary CSPs

We generated binary CSPs with four parameters: the number of variables N, the domain size of each variable D, the number of constraints M, and the strength of the constraints S. The strength of a constraint is the ratio of the number of forbidden value pairs to the number of all value pairs of the two variables in the constraint. This means that (1-S)D² value pairs satisfy the constraint and SD² value pairs violate it. When generating a problem, we first selected M variable pairs as constraints and, for each constraint (variable pair), selected (1-S)D² permitted value pairs randomly. All selections were made randomly, so a generated problem may not have a solution. This way of generating CSPs was based on [Freuder & Wallace 92]. We first tested EFLOP's effect on CSPs with 20 variables, 10 possible values for each variable and 40 constraints (N=20, D=10, M=40).
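
The generation just described can be sketched as follows; this is our code, not the authors', and rounding (1-S)D² to an integer number of permitted pairs is an assumption.

```python
import random

def generate_binary_csp(n, d, m, s, seed=None):
    """Random binary CSP with n variables, domain size d, m constraints, and
    constraint strength s (fraction of forbidden value pairs)."""
    rng = random.Random(seed)
    variables = list(range(n))
    domains = {v: list(range(d)) for v in variables}
    # Pick m distinct variable pairs to be constrained.
    all_pairs = [(u, v) for u in variables for v in variables if u < v]
    constrained = rng.sample(all_pairs, m)
    constraints = {}
    n_allowed = round((1 - s) * d * d)           # (1-S)*D^2 permitted value pairs
    for (u, v) in constrained:
        value_pairs = [(a, b) for a in domains[u] for b in domains[v]]
        allowed = set(rng.sample(value_pairs, n_allowed))
        # Bind the allowed set per constraint via a default argument.
        constraints[(u, v)] = lambda a, b, allowed=allowed: (a, b) in allowed
    return variables, domains, constraints
```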

Figure 3 shows the probability of solvability, i.e., the ratio of solvable problems to generated problems, and Figure 4 shows the average number of hill climbing steps (averaged over 100 solvable problems for each strength). MCHC with EFLOP was faster than MCHC without it at every strength, and the difference was largest at the 50% solvable strength, the strength at which half of the randomly generated problems are solvable. This is because, when S is near this value, the ratio of the number of local minima to the number of solutions is large, so MCHC is trapped by local minima with high probability.

Table 2 shows the 50% solvable strengths and the average numbers of hill climbing steps (averaged over 100 solvable problems) at those strengths for CSPs with 20, 40 and 60 variables. For each case we set M=2N. It is interesting to note that the 50% solvable strength was almost constant, which suggests that it depends only on the constraints-to-variables ratio and the domain size. EFLOP improved the performance of MCHC in all cases, and the improvement was greater for large problems: MCHC with EFLOP was about 3 times faster than MCHC without EFLOP for problems with 20 variables, and the improvement was more than 10 times for problems with 60 variables.

Satisfiability Problems

A propositional satisfiability problem (SAT) is the problem of determining whether a given logical formula can be satisfied and, if it is satisfiable, of finding a truth assignment that satisfies it. The formula is given in conjunctive normal form, and each clause is a constraint that at least one literal in the clause be true. Selman et al. proposed an efficient repair-based method for SAT, GSAT [Selman, Levesque & Mitchell 92], and extended GSAT with clause weighting and random walk to overcome its inability to escape from local minima [Selman & Kautz 93]. We used the basic GSAT without these extensions in our experiments because we wanted to measure the effect of EFLOP alone.

We generated SAT problems with three parameters: the number of variables N, the number of clauses M, and the clause length K. For each clause, we selected K variables randomly and negated each variable with probability 0.5. This generation was based on [Mitchell, Selman & Levesque 92].

[Table 3: Average number of flips for 3-SAT]
[Table 4: Average number of flips for 3-, 4- and 5-SAT with 50 variables]

We first tested EFLOP with 3-SAT (K=3) problems. Mitchell et al. [Mitchell, Selman & Levesque 92] reported on the hardness of 3-SAT; their conclusion was that 3-SAT is most difficult when the ratio of clauses to variables is at the 50% satisfiable ratio, the ratio at which a randomly generated problem is satisfiable with probability 0.5, and that this ratio is 4.3 for 3-SAT. Table 3 shows the average number of flips (averaged over 100 satisfiable problems) for randomly generated 3-SAT with M=4.3N. GSAT with EFLOP was about twice as fast as GSAT without EFLOP, and the difference did not depend on the number of variables. We also examined how EFLOP's effect depends on clause length. Table 4 shows the 50% satisfiable clauses-to-variables ratio for 3-, 4- and 5-SAT and the average number of flips (averaged over 100 satisfiable problems) at that ratio. EFLOP was more effective for problems with longer clauses, and GSAT with EFLOP was about 10 times faster than GSAT without EFLOP for 5-SAT.
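
The random k-SAT generation described above, together with a minimal version of basic GSAT (no clause weighting or random walk), can be sketched as follows. This is our illustration, not the code used in the experiments.

```python
import random

def generate_ksat(n_vars, n_clauses, k=3, seed=None):
    """Random k-SAT: for each clause pick k distinct variables and negate each
    with probability 0.5. Literals are signed integers, e.g. -3 means NOT x3."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.sample(range(1, n_vars + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses

def gsat(clauses, n_vars, max_flips=10000, seed=None):
    """Basic GSAT: repeatedly flip the variable whose flip maximizes the number
    of satisfied clauses, breaking ties randomly."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def n_sat(a):
        return sum(any((lit > 0) == a[abs(lit)] for lit in c) for c in clauses)

    for _ in range(max_flips):
        if n_sat(assign) == len(clauses):
            return assign                        # satisfying assignment found

        def score(v):                            # clauses satisfied after flipping v
            assign[v] = not assign[v]
            s = n_sat(assign)
            assign[v] = not assign[v]
            return s

        best = max(range(1, n_vars + 1), key=lambda v: (score(v), rng.random()))
        assign[best] = not assign[best]
    return None                                  # give up after max_flips
```
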
Conclusions

We have proposed a new procedure, EFLOP, for improving the performance of repair-based constraint satisfaction methods such as min-conflicts hill climbing (MCHC) and GSAT. Repair-based methods solve a CSP by minimizing the number of conflicts, and their most serious problem is that they cannot escape from local minima. EFLOP propagates value changes through satisfied constraints and changes the values of mutually dependent variables at once. This enables repair-based methods to escape from local minima and thus improves their performance. EFLOP is independent of the problem domain and can be used with any repair-based method. We tested EFLOP's effect on graph colorability problems, randomly generated binary CSPs and satisfiability problems. EFLOP improved the performance of MCHC and GSAT, and the improvement was greater for larger and more difficult problems.

An interesting question is how effective EFLOP is for structured problems such as planning problems. In structured problems some variables strongly depend on each other, and a value change for one variable causes many conflicts between such variables. In a planning problem, the selection of an operator depends strongly on previous operators and affects the selection of following operators through its preconditions and postconditions. This property causes many local minima and makes it very difficult to solve structured problems with repair-based methods [Kautz & Selman 92]. EFLOP changes the values of such mutually dependent variables at once, and the ability to do this will improve the performance of repair-based methods for structured problems.

References

[Doyle 79] J. Doyle: A Truth Maintenance System, Artificial Intelligence, Vol. 12, pp.231-272, 1979.

[Fox 87] M. Fox: Constraint Directed Search: A Case Study of Job-Shop Scheduling, Morgan Kaufmann Publishers, Inc., 1987.

[Freuder & Wallace 92] E. C. Freuder and R. J. Wallace: Partial constraint satisfaction, Artificial Intelligence, Vol. 58, pp.21-70, 1992.

[Johnson et al. 91] D. S. Johnson, C. R. Aragon, L. A. McGeoch and C. Schevon: Optimization by simulated annealing: an experimental evaluation, Part II, Operations Research, Vol. 39, pp.378-406, 1991.

[Kautz & Selman 92] H. Kautz and B. Selman: Planning as Satisfiability, Proc. of ECAI-92, pp.359-363, 1992.

[Minton et al. 90] S. Minton, M. D. Johnston, A. B. Philips and P. Laird: Solving Large-Scale Constraint Satisfaction and Scheduling Problems Using a Heuristic Repair Method, Proc. of AAAI-90, pp.17-24, 1990.

[Minton et al. 92] S. Minton, M. D. Johnston, A. B. Philips and P. Laird: Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems, Artificial Intelligence, Vol. 58, pp.161-203, 1992.

[Mitchell, Selman & Levesque 92] D. Mitchell, B. Selman and H. Levesque: Hard and Easy Distributions of SAT Problems, Proc. of AAAI-92, pp.459-465, 1992.

[Mohr & Henderson 86] R. Mohr and T. C. Henderson: Arc and Path Consistency Revisited, Artificial Intelligence, Vol. 28, pp.225-233, 1986.

[Morris 93] P. Morris: The Breakout Method for Escaping From Local Minima, Proc. of AAAI-93, pp.40-45, 1993.

[Selman, Levesque & Mitchell 92] B. Selman, H. Levesque and D. Mitchell: A New Method for Solving Hard Satisfiability Problems, Proc. of AAAI-92, pp.440-446, 1992.

[Selman & Kautz 93] B. Selman and H. Kautz: Domain-Independent Extensions to GSAT: Solving Large Structured Satisfiability Problems, Proc. of IJCAI-93, pp.290-295, 1993.

local minima. EFLOP propagates changes of values through satisfied constraints and changes the values of mutually dependent variables at once. This enables to escape from local minima and thus improves the performance of repair-based methods. EFLOP is independent on problem domains and can be used with any repair-based method. We tested EFLOP s effect by using graph colorability problems, randomly generated binary CSPs and satisfiability problems. EFLOP improved the performance of MCHC and GSAT, and the improvement was greater for larger and more difficult problems. An interesting question is how effective EFLOP is for structured problems such as planning problems. In structured problems, some variables strongly depend on each other and a value change for one variable causes many conflicts between such variables. In a planning problem, a selection of an operator is strongly dependent on previous operators and affects selections for following operators through its preconditions and post conditions. This property causes many local minima and makes it very difficult to solve structured problems by repair-based methods [Kautz & Mitchell 921. EFLOP changes values of such mutually dependent variables at once, and the ability to do this will improve the performance of repair- based methods for structured problems. Heuristic Repair Method, Proc. of AAAI-90, pp.17-24, 1990 [Minton et al. 901 S. Minton, M. D. Johnston, A. B. Phillips and P. Lair-d: Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems, Artificial Intelligence Vol. 58, pp. 16 l-203, 1992 [Mitchell, Selman & Levesque 921 D. Mitchell, B. Selman and H. Levesque: Hard and Easy Distributions of SAT Problems, Proc. of AAAI-92, pp.459-465, 1992 [Mohr, Thomas & Henderson 861 R. Mohr and T. C. Henderson, Arc and Path Consistency Revisited, Artificial Intelligence Vol. 28 pp.225-233, 1986 [Morris 931 P. Morris, The Breakout Method for Escaping From Local Minima, Proc. of AAAI-93, pp.40-45, 1993 [Selman, Levesque & Mitchell 921 B. Selman, H. Levesque and D. Mitchell: A New Method for solving Hard Satisfiability Problems, Proc. of AAAI-92, ~~440-446, 1992 [Selman & Kautz 931 B. Selman and H. Kautz: Domain- Independent Extensions to GSAT: Solving Large Structured Satisfiability Problems, Proc. of IJCAI-93, pp.290-295, 1993 References [Doyle 791 J. Doyle : A Truth Maintenance System, Artificial Intelligence, Vol. 12, pp.23 l-272, 1979 [Fox 871 M. Fox : Constraint Directed Search: A Case Study of Job-Shop Scheduling, Morgan Kaufman Publishers, Inc., 1887 [Freuder & Wallace 921 E.C. Freuder and R.J.Wallace : Partial constraint satisfaction, Artificial Intelligence Vol. 58, pp.21-70, 1992 [Kautz & Selman 921 H. kautz and B. Selman : Planning as Satisfiability, Proc. ECAI 92, pp.359-363 [Johnson et al. 911 A. K. Johnson, C. R. Aragon, L. A. McGeoch and C. Schevon: Optimization by simulated annealing: an experimental evaluation Part II, Gperating Research Vol. 39, pp.378406, 199 1 [Minton et al. 901 S. Minton, M. D. Johnston, A. B. Philips and P. Laird: Solving Large-Scale Constraint Satisfaction and Scheduling Problems Using a Techniques 349