Principles of AI Planning


5. Planning as search: progression and regression
Albert-Ludwigs-Universität Freiburg
Bernhard Nebel and Robert Mattmüller
October 30th, 2013

Outline: Introduction, Classification, Planning as (classical) search

What do we mean by search? Search is a very generic term: every algorithm that tries out various alternatives can be said to search in some way. Here, we mean classical search algorithms: search nodes are expanded to generate successor nodes. Examples: breadth-first search, A*, hill-climbing, ... To be brief, we just say "search" in the following (not "classical search").

Do you know this stuff already? We assume prior knowledge of basic search algorithms: uninformed vs. informed, systematic vs. local. There will be a small refresher in the next chapter. Background: Russell & Norvig, Artificial Intelligence: A Modern Approach, Ch. 3 (all of it), Ch. 4 (local search).

Search in planning. Search is one of the big success stories of AI, and many planning algorithms are based on classical AI search (we'll see some other algorithms later, though). Search will be the focus of this and the following chapters (the majority of the course).

Satisficing or optimal planning? We must carefully distinguish two different problems: satisficing planning, where any solution is OK (although shorter solutions are typically preferred), and optimal planning, where plans must have the shortest possible length. Both are often solved by search, but the details are very different: there is almost no overlap between good techniques for satisficing planning and good techniques for optimal planning, and many problems that are trivial for satisficing planners are impossibly hard for optimal planners.

Planning by search. How to apply search to planning? There are many choices to make! Choice 1: search direction. Progression: forward from the initial state to the goal; regression: backward from the goal states to the initial state; or bidirectional search.

Choice 2: search space representation. Search nodes are associated with single states (state-space search), or search nodes are associated with sets of states.

Choice 3: search algorithm. Uninformed search: depth-first, breadth-first, iterative deepening, ... Heuristic search (systematic): greedy best-first, A*, Weighted A*, IDA*, ... Heuristic search (local): hill-climbing, simulated annealing, beam search, ...

Choice 4: search control. Heuristics for informed search algorithms; pruning techniques: invariants, symmetry elimination, partial-order reduction, helpful actions pruning, ...

Search-based satisficing planners. FF (Hoffmann & Nebel, 2001): search direction: forward search; search space representation: single states; search algorithm: enforced hill-climbing (informed local); heuristic: FF heuristic (inadmissible); pruning technique: helpful actions (incomplete). FF is one of the best satisficing planners.

Search-based optimal planners. Fast Downward Stone Soup (Helmert et al., 2011): search direction: forward search; search space representation: single states; search algorithm: A* (informed systematic); heuristic: multiple admissible heuristics combined into a heuristic portfolio (LM-cut, M&S, blind, ...); pruning technique: none. It is one of the best optimal planners.

Our plan for the next lectures. Choices to make:
1. search direction: progression/regression/both (this chapter)
2. search space representation: states/sets of states (this chapter)
3. search algorithm: uninformed/heuristic, systematic/local (next chapter)
4. search control: heuristics, pruning techniques (following chapters)


Planning by forward search: progression. Progression means computing the successor state app_o(s) of a state s with respect to an operator o. Progression planners find solutions by forward search: start from the initial state; iteratively pick a previously generated state and progress it through an operator, generating a new state; a solution is found when a goal state is generated. Pro: very easy and efficient to implement.
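The progression loop above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the course: the operator encoding, the breadth-first strategy with a closed list, and the toy task are all assumptions made here.

```python
from collections import deque, namedtuple

# A STRIPS-style operator: precondition, added atoms, deleted atoms (all sets).
Operator = namedtuple("Operator", "name pre add dele")

def applicable(s, op):
    return op.pre <= s

def app(s, op):
    """Progression: the successor state app_o(s) = (s \\ del) ∪ add."""
    return frozenset((s - op.dele) | op.add)

def progression_search(init, goal, ops):
    """Breadth-first forward search with duplicate detection (closed list)."""
    start = frozenset(init)
    queue = deque([(start, [])])
    closed = {start}
    while queue:
        s, plan = queue.popleft()
        if goal <= s:                  # a goal state was generated: solution found
            return plan
        for op in ops:
            if applicable(s, op):
                t = app(s, op)
                if t not in closed:    # skip states generated along another path
                    closed.add(t)
                    queue.append((t, plan + [op.name]))
    return None

# Hypothetical toy task (not from the slides): reach {'b', 'c'} from the empty state.
ops = [
    Operator("o1", frozenset(), frozenset({"a"}), frozenset()),
    Operator("o2", frozenset({"a"}), frozenset({"b"}), frozenset({"a"})),
    Operator("o3", frozenset({"b"}), frozenset({"c"}), frozenset()),
]
plan = progression_search(set(), {"b", "c"}, ops)
print(plan)   # ['o1', 'o2', 'o3']
```

Dropping the `closed` set turns this into the operator-sequence search space discussed next: the algorithm still works, but revisits transpositions.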

Search space representation in progression planners. Two alternative search spaces:
1. Search nodes correspond to states: when the same state is generated along different paths, it is not considered again (duplicate detection). Pro: saves the time of considering the same state again. Con: memory-intensive (must maintain a closed list).
2. Search nodes correspond to operator sequences: different operator sequences may lead to identical states (transpositions), and the search does not notice this. Pro: can be very memory-efficient. Con: much wasted work (often exponentially slower).
The first alternative is usually preferable in planning (unlike in many classical search benchmarks such as the 15-puzzle).

Progression planning example (depth-first search), where search nodes correspond to operator sequences (no duplicate detection). [Figure: depth-first expansion of the transition graph, starting from the initial state s_0 in the state space S.]



Forward search vs. backward search. Going through a transition graph in forward and backward directions is not symmetric: forward search starts from a single initial state, while backward search starts from a set of goal states; and when applying an operator o in a state s in the forward direction, there is a unique successor state s', but if we applied operator o to end up in a state s', there can be several possible predecessor states s. Hence the most natural representation for backward search in planning associates sets of states with search nodes.

Planning by backward search: regression. Regression means computing the possible predecessor states regr_o(G) of a set of states G with respect to the last operator o that was applied. Regression planners find solutions by backward search: start from the set of goal states; iteratively pick a previously generated state set and regress it through an operator, generating a new state set; a solution is found when a generated state set includes the initial state. Pro: can handle many states simultaneously. Con: the basic operations are complicated and expensive.

Search space representation in regression planners: identify state sets with logical formulae. Search nodes correspond to state sets, and each state set is represented by a logical formula: ϕ represents {s ∈ S | s ⊨ ϕ}. Many basic search operations, such as detecting duplicates, are NP-hard or coNP-hard.


Regression planning example (depth-first search). [Figure: backward chaining from the goal γ toward the initial state I.] Starting from γ, compute ϕ_1 = regr(γ), then ϕ_2 = regr(ϕ_1), then ϕ_3 = regr(ϕ_2); a solution is found since I ⊨ ϕ_3.

Regression for STRIPS planning tasks. Definition (STRIPS planning task): a planning task is a STRIPS planning task if all operators are STRIPS operators and the goal is a conjunction of atoms. Regression for STRIPS planning tasks is very simple: goals are conjunctions of atoms a_1 ∧ … ∧ a_n. First step: choose an operator that makes none of a_1, …, a_n false. Second step: remove the goal atoms achieved by the operator (if any) and add its preconditions. The outcome of regression is again a conjunction of atoms. Optimization: only consider operators making some a_i true.

STRIPS regression. Definition (STRIPS regression): Let ϕ = ϕ_1 ∧ … ∧ ϕ_n be a conjunction of atoms, and let o = ⟨χ, e⟩ be a STRIPS operator which adds the atoms a_1, …, a_k and deletes the atoms d_1, …, d_l. The regression of ϕ with respect to o is
sregr_o(ϕ) := ⊥ if a_i = d_j for some i, j,
sregr_o(ϕ) := ⊥ if ϕ_i = d_j for some i, j,
sregr_o(ϕ) := χ ∧ ⋀({ϕ_1, …, ϕ_n} \ {a_1, …, a_k}) otherwise.
Note: sregr_o(ϕ) is again a conjunction of atoms, or ⊥.
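The three-case definition translates directly into code. A minimal sketch, assuming atoms are plain strings, a goal is a set of atoms read as a conjunction, and `None` stands for ⊥; the example operator is made up for illustration and is not the blocks-world instance from the slides.

```python
from collections import namedtuple

# A STRIPS operator: precondition, added atoms, deleted atoms (all sets of atoms).
Operator = namedtuple("Operator", "name pre add dele")

def sregr(phi, op):
    """STRIPS regression of goal phi (a set of atoms, read as a conjunction)
    through operator op. Returns None to represent ⊥ (regression impossible)."""
    if op.add & op.dele:      # a_i = d_j: the operator is contradictory
        return None
    if phi & op.dele:         # phi_i = d_j: op would make a goal atom false
        return None
    # Remove the goal atoms achieved by op, add its preconditions.
    return op.pre | (phi - op.add)

# Hypothetical example: o achieves g1, requires p, deletes q.
o = Operator("o", pre=frozenset({"p"}), add=frozenset({"g1"}), dele=frozenset({"q"}))
print(sregr({"g1", "g2"}, o))   # frozenset({'p', 'g2'})
print(sregr({"q", "g2"}, o))    # None: op deletes the goal atom q
```

The "only consider operators making some a_i true" optimization from the slide would add an early `if not (phi & op.add): skip` filter in the surrounding search loop.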

STRIPS regression example (blocks world). [Figure: three operators o_1, o_2, o_3 for stacking blocks, using atoms of the form on (one block on another), ont (block on the table), and clr (block clear); the block arguments of the atoms were not recoverable from the transcript.] Starting from the goal γ (a conjunction of on-atoms), regression computes ϕ_1 = sregr_{o_3}(γ), ϕ_2 = sregr_{o_2}(ϕ_1), ϕ_3 = sregr_{o_1}(ϕ_2). Note: predecessor states are in general not unique; the picture is just for illustration purposes.

Regression for general planning tasks. With disjunctions and conditional effects, things become trickier: how should we regress a ∨ (b ∧ c) with respect to ⟨q, d ▷ b⟩? The story about goals and subgoals and fulfilling subgoals, as in the STRIPS case, is no longer useful. We present a general method for doing regression for any formula and any operator. From now on we extensively use the idea of representing sets of states as formulae.

Effect preconditions. Definition (effect precondition): The effect precondition EPC_l(e) for a literal l and an effect e is defined as follows:
EPC_l(l) = ⊤
EPC_l(l′) = ⊥ if l ≠ l′ (for literals l′)
EPC_l(e_1 ∧ … ∧ e_n) = EPC_l(e_1) ∨ … ∨ EPC_l(e_n)
EPC_l(χ ▷ e) = EPC_l(e) ∧ χ
Intuition: EPC_l(e) describes the situations in which effect e causes literal l to become true.
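The recursive definition can be sketched directly, assuming a hypothetical tuple encoding chosen here for illustration: formulas are `('atom', a)`, `('not', f)`, `('and', ...)`, `('or', ...)`, and effects are `('add', a)` for a, `('del', a)` for ¬a, `('conj', ...)` for conjunctions, and `('when', chi, e)` for χ ▷ e. The equivalence checks simply enumerate all states.

```python
from itertools import chain, combinations

TRUE, FALSE = ('and',), ('or',)          # empty conjunction / empty disjunction

def holds(s, f):
    """Does state s (the set of true atoms) satisfy formula f?"""
    if f[0] == 'atom': return f[1] in s
    if f[0] == 'not':  return not holds(s, f[1])
    if f[0] == 'and':  return all(holds(s, g) for g in f[1:])
    if f[0] == 'or':   return any(holds(s, g) for g in f[1:])

def epc(lit, e):
    """EPC_lit(e) as a formula, where lit is a pair (atom, polarity)."""
    if e[0] in ('add', 'del'):           # atomic effect: ⊤ if it is lit, else ⊥
        return TRUE if (e[1], e[0] == 'add') == lit else FALSE
    if e[0] == 'conj':                   # EPC_l(e1 ∧ … ∧ en) = EPC_l(e1) ∨ … ∨ EPC_l(en)
        return ('or',) + tuple(epc(lit, sub) for sub in e[1:])
    if e[0] == 'when':                   # EPC_l(chi ▷ e) = EPC_l(e) ∧ chi
        return ('and', epc(lit, e[2]), e[1])

def equivalent(f, g, atoms):
    """Check f ≡ g by enumerating all states over the given atoms."""
    states = chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))
    return all(holds(set(t), f) == holds(set(t), g) for t in states)

b, c = ('atom', 'b'), ('atom', 'c')
lit_a = ('a', True)
e1 = ('conj', ('add', 'b'), ('add', 'c'))                            # b ∧ c
e2 = ('conj', ('add', 'a'), ('when', b, ('add', 'a')))               # a ∧ (b ▷ a)
e3 = ('conj', ('when', c, ('add', 'a')), ('when', b, ('add', 'a')))  # (c ▷ a) ∧ (b ▷ a)
print(equivalent(epc(lit_a, e1), FALSE, 'abc'),         # EPC_a(b ∧ c) ≡ ⊥
      equivalent(epc(lit_a, e2), TRUE, 'abc'),          # EPC_a(a ∧ (b ▷ a)) ≡ ⊤
      equivalent(epc(lit_a, e3), ('or', c, b), 'abc'))  # ≡ c ∨ b
```

The three effects are exactly the examples on the next slide, so the printed checks confirm the simplifications given there.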

Effect precondition examples:
EPC_a(b ∧ c) = ⊥ ∨ ⊥ ≡ ⊥
EPC_a(a ∧ (b ▷ a)) = ⊤ ∨ (⊤ ∧ b) ≡ ⊤
EPC_a((c ▷ a) ∧ (b ▷ a)) = (⊤ ∧ c) ∨ (⊤ ∧ b) ≡ c ∨ b

Effect preconditions: connection to change sets. Lemma (A): Let s be a state, l a literal and e an effect. Then l ∈ [e]_s if and only if s ⊨ EPC_l(e).
Proof: Induction on the structure of the effect e.
Base case 1, e = l: l ∈ [l]_s = {l} by definition, and s ⊨ EPC_l(l) = ⊤ by definition. Both sides of the equivalence are true.
Base case 2, e = l′ for some literal l′ ≠ l: l ∉ [l′]_s = {l′} by definition, and s ⊭ EPC_l(l′) = ⊥ by definition. Both sides are false.


Proof (ctd.):
Inductive case 1, e = e_1 ∧ … ∧ e_n:
l ∈ [e]_s iff l ∈ [e_1]_s ∪ … ∪ [e_n]_s (Def [e_1 ∧ … ∧ e_n]_s)
iff l ∈ [e′]_s for some e′ ∈ {e_1, …, e_n}
iff s ⊨ EPC_l(e′) for some e′ ∈ {e_1, …, e_n} (IH)
iff s ⊨ EPC_l(e_1) ∨ … ∨ EPC_l(e_n)
iff s ⊨ EPC_l(e_1 ∧ … ∧ e_n). (Def EPC)
Inductive case 2, e = χ ▷ e′:
l ∈ [χ ▷ e′]_s iff l ∈ [e′]_s and s ⊨ χ (Def [χ ▷ e′]_s)
iff s ⊨ EPC_l(e′) and s ⊨ χ (IH)
iff s ⊨ EPC_l(e′) ∧ χ
iff s ⊨ EPC_l(χ ▷ e′). (Def EPC)

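Lemma A can also be checked by brute force on small examples: compute the change set [e]_s directly from the effect semantics and compare it with EPC on every state. The tuple encoding mirrors the earlier sketch, and the test effect here is made up for illustration.

```python
from itertools import chain, combinations, product

# Formulas: ('atom', a), ('not', f), ('and', *fs), ('or', *fs);
# effects: ('add', a), ('del', a), ('conj', *es), ('when', chi, e).
TRUE, FALSE = ('and',), ('or',)

def holds(s, f):
    if f[0] == 'atom': return f[1] in s
    if f[0] == 'not':  return not holds(s, f[1])
    if f[0] == 'and':  return all(holds(s, g) for g in f[1:])
    if f[0] == 'or':   return any(holds(s, g) for g in f[1:])

def epc(lit, e):
    if e[0] in ('add', 'del'):
        return TRUE if (e[1], e[0] == 'add') == lit else FALSE
    if e[0] == 'conj':
        return ('or',) + tuple(epc(lit, sub) for sub in e[1:])
    if e[0] == 'when':
        return ('and', epc(lit, e[2]), e[1])

def changes(s, e):
    """The change set [e]_s: literals (atom, polarity) that e makes true in s."""
    if e[0] in ('add', 'del'):
        return {(e[1], e[0] == 'add')}
    if e[0] == 'conj':
        return set().union(*(changes(s, sub) for sub in e[1:]))
    if e[0] == 'when':
        return changes(s, e[2]) if holds(s, e[1]) else set()

def lemma_a_holds(e, atoms):
    """Exhaustive check: l ∈ [e]_s iff s ⊨ EPC_l(e), for all states s and literals l."""
    states = chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))
    return all((lit in changes(set(t), e)) == holds(set(t), epc(lit, e))
               for t in states for lit in product(atoms, (True, False)))

b = ('atom', 'b')
e = ('conj', ('add', 'a'), ('when', b, ('del', 'c')))   # a ∧ (b ▷ ¬c)
print(lemma_a_holds(e, 'abc'))   # True
```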

Effect preconditions: connection to normal form. Remark (EPC vs. effect normal form): in terms of EPC_a(e), any operator ⟨χ, e⟩ can be expressed in effect normal form as
⟨χ, ⋀_{a ∈ A} ((EPC_a(e) ▷ a) ∧ (EPC_¬a(e) ▷ ¬a))⟩,
where A is the set of all state variables.

Regressing state variables. The formula EPC_a(e) ∨ (a ∧ ¬EPC_¬a(e)) expresses the value of state variable a ∈ A after applying o in terms of the values of the state variables before applying o: either a became true, or a was true before and did not become false.

Regressing state variables: examples. Let e = (b ▷ a) ∧ (c ▷ ¬a) ∧ b ∧ ¬d.
variable x | EPC_x(e) ∨ (x ∧ ¬EPC_¬x(e))
a          | b ∨ (a ∧ ¬c)
b          | ⊤ ∨ (b ∧ ¬⊥) ≡ ⊤
c          | ⊥ ∨ (c ∧ ¬⊥) ≡ c
d          | ⊥ ∨ (d ∧ ¬⊤) ≡ ⊥

Regressing state variables: correctness. Lemma (B): Let a be a state variable, o = ⟨χ, e⟩ an operator, s a state, and s′ = app_o(s). Then s ⊨ EPC_a(e) ∨ (a ∧ ¬EPC_¬a(e)) if and only if s′ ⊨ a.
Proof: (⇒): Assume s ⊨ EPC_a(e) ∨ (a ∧ ¬EPC_¬a(e)). Do a case analysis on the two disjuncts.
1. Assume that s ⊨ EPC_a(e). By Lemma A, we have a ∈ [e]_s and hence s′ ⊨ a.
2. Assume that s ⊨ a ∧ ¬EPC_¬a(e). By Lemma A, we have ¬a ∉ [e]_s. Hence a remains true in s′.


Proof (ctd.): (⇐): We showed that if the formula is true in s, then a is true in s′. For the second part, we show that if the formula is false in s, then a is false in s′. So assume s ⊭ EPC_a(e) ∨ (a ∧ ¬EPC_¬a(e)). Then s ⊨ ¬EPC_a(e) ∧ (¬a ∨ EPC_¬a(e)) (de Morgan). Case distinction: a is true or a is false in s.
1. Assume that s ⊨ a. Now s ⊨ EPC_¬a(e) because s ⊨ ¬a ∨ EPC_¬a(e). Hence by Lemma A, ¬a ∈ [e]_s and we get s′ ⊭ a.
2. Assume that s ⊭ a. Because s ⊭ EPC_a(e), by Lemma A we get a ∉ [e]_s and hence s′ ⊭ a.
Therefore, in both cases s′ ⊭ a.


Regression: general definition. We base the definition of regression on the formulae EPC_l(e).
Definition (general regression): Let ϕ be a propositional formula and o = ⟨χ, e⟩ an operator. The regression of ϕ with respect to o is regr_o(ϕ) = χ ∧ ϕ_r ∧ κ, where
1. ϕ_r is obtained from ϕ by replacing each a ∈ A by EPC_a(e) ∨ (a ∧ ¬EPC_¬a(e)), and
2. κ = ⋀_{a ∈ A} ¬(EPC_a(e) ∧ EPC_¬a(e)).
The formula κ expresses that operators are only applicable in states where their change sets are consistent.
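A sketch of regr_o as literal formula substitution, using the same hypothetical tuple encoding as in the earlier sketches. The final loop checks, by enumeration, that s ⊨ regr_o(ϕ) holds exactly when o is applicable in s and app_o(s) ⊨ ϕ, on the made-up operator ⟨a, c ▷ b⟩.

```python
from itertools import chain, combinations

TRUE, FALSE = ('and',), ('or',)

def holds(s, f):
    if f[0] == 'atom': return f[1] in s
    if f[0] == 'not':  return not holds(s, f[1])
    if f[0] == 'and':  return all(holds(s, g) for g in f[1:])
    if f[0] == 'or':   return any(holds(s, g) for g in f[1:])

def epc(lit, e):
    if e[0] in ('add', 'del'):
        return TRUE if (e[1], e[0] == 'add') == lit else FALSE
    if e[0] == 'conj':
        return ('or',) + tuple(epc(lit, sub) for sub in e[1:])
    if e[0] == 'when':
        return ('and', epc(lit, e[2]), e[1])

def substitute(f, repl):
    """Replace each atom in f by the formula repl[atom]."""
    if f[0] == 'atom': return repl.get(f[1], f)
    if f[0] == 'not':  return ('not', substitute(f[1], repl))
    return (f[0],) + tuple(substitute(g, repl) for g in f[1:])

def regr(phi, op, atoms):
    """General regression regr_o(phi) = chi ∧ phi_r ∧ kappa."""
    chi, e = op
    repl = {a: ('or', epc((a, True), e),
                ('and', ('atom', a), ('not', epc((a, False), e))))
            for a in atoms}
    kappa = ('and',) + tuple(('not', ('and', epc((a, True), e),
                                      epc((a, False), e))) for a in atoms)
    return ('and', chi, substitute(phi, repl), kappa)

def changes(s, e):
    if e[0] in ('add', 'del'):
        return {(e[1], e[0] == 'add')}
    if e[0] == 'conj':
        return set().union(*(changes(s, sub) for sub in e[1:]))
    if e[0] == 'when':
        return changes(s, e[2]) if holds(s, e[1]) else set()

def app(s, op):
    """app_o(s); None if o is inapplicable or its change set is inconsistent."""
    chi, e = op
    ch = changes(s, e)
    if not holds(s, chi) or any((a, not pos) in ch for a, pos in ch):
        return None
    return (s | {a for a, pos in ch if pos}) - {a for a, pos in ch if not pos}

atoms = 'abcd'
op = (('atom', 'a'), ('when', ('atom', 'c'), ('add', 'b')))   # o = ⟨a, c ▷ b⟩
phi = ('atom', 'b')
r = regr(phi, op, atoms)
states = [set(t) for t in chain.from_iterable(combinations(atoms, k)
                                              for k in range(len(atoms) + 1))]
expected = ('and', ('atom', 'a'), ('or', ('atom', 'c'), ('atom', 'b')))
same = all(holds(s, r) == holds(s, expected) for s in states)      # r ≡ a ∧ (c ∨ b)
ok = all(holds(s, r) == (app(s, op) is not None and holds(app(s, op), phi))
         for s in states)                                          # matches app_o
print(same, ok)   # True True
```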

Regression examples:
regr_⟨a, b⟩(b) = a ∧ (⊤ ∨ (b ∧ ¬⊥)) ≡ a
regr_⟨a, b⟩(b ∧ c ∧ d) = a ∧ (⊤ ∨ (b ∧ ¬⊥)) ∧ (⊥ ∨ (c ∧ ¬⊥)) ∧ (⊥ ∨ (d ∧ ¬⊥)) ≡ a ∧ c ∧ d
regr_⟨a, c ▷ b⟩(b) = a ∧ (c ∨ (b ∧ ¬⊥)) ≡ a ∧ (c ∨ b)
regr_⟨a, (c ▷ b) ∧ (b ▷ ¬b)⟩(b) = a ∧ (c ∨ (b ∧ ¬b)) ∧ ¬(c ∧ b) ≡ a ∧ c ∧ ¬b
regr_⟨a, (c ▷ b) ∧ (d ▷ ¬b)⟩(b) = a ∧ (c ∨ (b ∧ ¬d)) ∧ ¬(c ∧ d) ≡ a ∧ (c ∨ b) ∧ (c ∨ ¬d) ∧ (¬c ∨ ¬d) ≡ a ∧ (c ∨ b) ∧ ¬d

Regression example: binary counter. Operator effect:
(¬b_0 ▷ b_0) ∧ ((¬b_1 ∧ b_0) ▷ (b_1 ∧ ¬b_0)) ∧ ((¬b_2 ∧ b_1 ∧ b_0) ▷ (b_2 ∧ ¬b_1 ∧ ¬b_0))
EPC_{b_2}(e) = ¬b_2 ∧ b_1 ∧ b_0
EPC_{b_1}(e) = ¬b_1 ∧ b_0
EPC_{b_0}(e) = ¬b_0
EPC_{¬b_2}(e) = ⊥
EPC_{¬b_1}(e) = ¬b_2 ∧ b_1 ∧ b_0
EPC_{¬b_0}(e) = (¬b_1 ∧ b_0) ∨ (¬b_2 ∧ b_1 ∧ b_0) ≡ (¬b_1 ∨ ¬b_2) ∧ b_0
Regression replaces the state variables as follows:
b_2 by (¬b_2 ∧ b_1 ∧ b_0) ∨ (b_2 ∧ ¬⊥) ≡ (b_1 ∧ b_0) ∨ b_2
b_1 by (¬b_1 ∧ b_0) ∨ (b_1 ∧ ¬(¬b_2 ∧ b_1 ∧ b_0)) ≡ (¬b_1 ∧ b_0) ∨ (b_1 ∧ (b_2 ∨ ¬b_0))
b_0 by ¬b_0 ∨ (b_0 ∧ ¬((¬b_1 ∨ ¬b_2) ∧ b_0)) ≡ ¬b_0 ∨ (b_1 ∧ b_2)
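The simplified replacements above can be verified mechanically: build the EPC formulas from the counter effect and compare them with the claimed simplifications on all eight states. Same hypothetical tuple encoding as in the earlier sketches.

```python
from itertools import chain, combinations

TRUE, FALSE = ('and',), ('or',)

def holds(s, f):
    if f[0] == 'atom': return f[1] in s
    if f[0] == 'not':  return not holds(s, f[1])
    if f[0] == 'and':  return all(holds(s, g) for g in f[1:])
    if f[0] == 'or':   return any(holds(s, g) for g in f[1:])

def epc(lit, e):
    if e[0] in ('add', 'del'):
        return TRUE if (e[1], e[0] == 'add') == lit else FALSE
    if e[0] == 'conj':
        return ('or',) + tuple(epc(lit, sub) for sub in e[1:])
    if e[0] == 'when':
        return ('and', epc(lit, e[2]), e[1])

def A(*fs): return ('and',) + fs
def O(*fs): return ('or',) + fs
def N(f): return ('not', f)

b0, b1, b2 = ('atom', 'b0'), ('atom', 'b1'), ('atom', 'b2')

# The counter effect: (¬b0 ▷ b0) ∧ ((¬b1 ∧ b0) ▷ (b1 ∧ ¬b0)) ∧ ((¬b2 ∧ b1 ∧ b0) ▷ ...)
e = ('conj',
     ('when', N(b0), ('add', 'b0')),
     ('when', A(N(b1), b0), ('conj', ('add', 'b1'), ('del', 'b0'))),
     ('when', A(N(b2), b1, b0),
      ('conj', ('add', 'b2'), ('del', 'b1'), ('del', 'b0'))))

def replacement(a):
    """EPC_a(e) ∨ (a ∧ ¬EPC_¬a(e)): the formula regression substitutes for a."""
    return O(epc((a, True), e), A(('atom', a), N(epc((a, False), e))))

# The simplified replacements claimed on the slide:
claimed = {'b2': O(A(b1, b0), b2),
           'b1': O(A(N(b1), b0), A(b1, O(b2, N(b0)))),
           'b0': O(N(b0), A(b1, b2))}

states = [set(t) for t in chain.from_iterable(
    combinations(('b0', 'b1', 'b2'), k) for k in range(4))]
ok = all(holds(s, replacement(a)) == holds(s, claimed[a])
         for a in claimed for s in states)
print(ok)   # True
```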

General regression: correctness. Theorem (correctness of regr_o(ϕ)): Let ϕ be a formula, o an operator and s a state. Then s ⊨ regr_o(ϕ) iff o is applicable in s and app_o(s) ⊨ ϕ.
Proof: Let o = ⟨χ, e⟩. Recall that regr_o(ϕ) = χ ∧ ϕ_r ∧ κ, where ϕ_r and κ are as defined previously. If o is inapplicable in s, then s ⊭ χ ∧ κ, both sides of the iff condition are false, and we are done. Hence, we only further consider states s where o is applicable. Let s′ := app_o(s). We know that s ⊨ χ ∧ κ (because o is applicable), so the iff condition we need to prove simplifies to: s ⊨ ϕ_r iff s′ ⊨ ϕ.


Proof (ctd.): To show: s ⊨ ϕ_r iff s′ ⊨ ϕ. We show that for all formulae ψ, s ⊨ ψ_r iff s′ ⊨ ψ, where ψ_r is ψ with every a ∈ A replaced by EPC_a(e) ∨ (a ∧ ¬EPC_¬a(e)). The proof is by structural induction on ψ.
Induction hypothesis: s ⊨ ψ_r if and only if s′ ⊨ ψ.
Base cases 1 & 2, ψ = ⊤ or ψ = ⊥: trivial, as ψ_r = ψ.
Base case 3, ψ = a for some a ∈ A: then ψ_r = EPC_a(e) ∨ (a ∧ ¬EPC_¬a(e)). By Lemma B, s ⊨ ψ_r iff s′ ⊨ ψ.


General regression: correctness

Proof (ctd.)
Inductive case 1, ψ = ¬ψ′:
s ⊨ ψ_r iff s ⊨ (¬ψ′)_r iff s ⊨ ¬(ψ′_r) iff s ⊭ ψ′_r iff (IH) s′ ⊭ ψ′ iff s′ ⊨ ¬ψ′ iff s′ ⊨ ψ.
Inductive case 2, ψ = ψ′ ∨ ψ″:
s ⊨ ψ_r iff s ⊨ (ψ′ ∨ ψ″)_r iff s ⊨ ψ′_r ∨ ψ″_r iff s ⊨ ψ′_r or s ⊨ ψ″_r iff (IH, twice) s′ ⊨ ψ′ or s′ ⊨ ψ″ iff s′ ⊨ ψ′ ∨ ψ″ iff s′ ⊨ ψ.
Inductive case 3, ψ = ψ′ ∧ ψ″: very similar to inductive case 2, just with ∧ instead of ∨ and "and" instead of "or".
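To make the substitution underlying this proof concrete, here is a minimal Python sketch of regr_o(ϕ) = χ ∧ ϕ_r ∧ κ. The tuple encoding of formulas, the effect representation, and all function names are illustrative assumptions, not the course's code.

```python
# Formulas as nested tuples (hypothetical encoding):
# ("top",), ("bot",), ("atom", x), ("not", f), ("and", f, g), ("or", f, g).
TOP, BOT = ("top",), ("bot",)

def holds(f, s):
    """Truth value of formula f in state s (a dict from atom names to bool)."""
    tag = f[0]
    if tag == "top": return True
    if tag == "bot": return False
    if tag == "atom": return s[f[1]]
    if tag == "not": return not holds(f[1], s)
    if tag == "and": return holds(f[1], s) and holds(f[2], s)
    return holds(f[1], s) or holds(f[2], s)  # "or"

def epc(lit, effects):
    """EPC_lit(e): disjunction of the conditions under which lit is triggered.
    An effect is a list of pairs (condition, (atom, value))."""
    disj = BOT
    for cond, eff in effects:
        if eff == lit:
            disj = cond if disj == BOT else ("or", disj, cond)
    return disj

def regress(phi, chi, effects, atoms):
    """regr_o(phi) = chi AND phi_r AND kappa for the operator o = <chi, effects>."""
    def subst(f):
        # phi_r: replace every atom a by EPC_a(e) OR (a AND NOT EPC_not-a(e))
        tag = f[0]
        if tag == "atom":
            a = f[1]
            return ("or", epc((a, True), effects),
                    ("and", f, ("not", epc((a, False), effects))))
        if tag in ("top", "bot"):
            return f
        if tag == "not":
            return ("not", subst(f[1]))
        return (tag, subst(f[1]), subst(f[2]))
    # kappa: no atom may be made both true and false
    kappa = TOP
    for a in atoms:
        kappa = ("and", kappa,
                 ("not", ("and", epc((a, True), effects), epc((a, False), effects))))
    return ("and", chi, ("and", subst(phi), kappa))

def apply_op(s, effects):
    """app_o(s) for consistent effects: a holds afterwards iff EPC_a(e) held,
    or a held before and EPC_not-a(e) did not (cf. Lemma B)."""
    return {a: holds(epc((a, True), effects), s)
               or (v and not holds(epc((a, False), effects), s))
            for a, v in s.items()}
```

On small examples, the theorem can be checked by enumerating all states and comparing the truth of the regressed formula in s with applicability plus the truth of ϕ in apply_op(s, effects).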


Emptiness and subsumption testing

The following two tests are useful when performing regression searches to avoid exploring unpromising branches:
Test that regr_o(ϕ) does not represent the empty set (which would mean that the search is in a dead end). For example, regr_⟨a,¬p⟩(p) ≡ a ∧ ⊥ ≡ ⊥.
Test that regr_o(ϕ) does not represent a subset of ϕ (which would make the problem harder than before). For example, regr_⟨b,c⟩(a) ≡ a ∧ b ⊨ a.
Both of these problems are NP-hard.
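Both tests can be phrased as satisfiability and entailment questions. The sketch below decides them by brute-force enumeration of all states, which is exponential and only workable for tiny examples; since both problems are NP-hard, practical planners resort to incomplete or approximate checks. The tuple encoding of formulas and the function names are illustrative assumptions.

```python
from itertools import product

def holds(f, s):
    """Truth value of a tuple-encoded formula in state s (dict atom -> bool)."""
    tag = f[0]
    if tag == "top": return True
    if tag == "bot": return False
    if tag == "atom": return s[f[1]]
    if tag == "not": return not holds(f[1], s)
    if tag == "and": return holds(f[1], s) and holds(f[2], s)
    return holds(f[1], s) or holds(f[2], s)  # "or"

def states(atoms):
    """All states over the given atoms (exponentially many)."""
    return (dict(zip(atoms, vs)) for vs in product([False, True], repeat=len(atoms)))

def is_dead_end(regr_phi, atoms):
    """Emptiness test: regr_o(phi) is unsatisfiable, i.e. represents no state."""
    return not any(holds(regr_phi, s) for s in states(atoms))

def is_subsumed(regr_phi, phi, atoms):
    """Subsumption test: every state satisfying regr_o(phi) already satisfies phi."""
    return all(holds(phi, s) for s in states(atoms) if holds(regr_phi, s))
```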

Formula growth

The formula regr_{o_1}(regr_{o_2}(... regr_{o_{n−1}}(regr_{o_n}(ϕ)))) may have size O(‖ϕ‖ · ‖o_1‖ · ‖o_2‖ ⋯ ‖o_{n−1}‖ · ‖o_n‖), i.e., the product of the sizes of ϕ and the operators: worst-case exponential size O(m^n).
Logical simplifications:
⊥ ∧ ϕ ≡ ⊥, ⊤ ∧ ϕ ≡ ϕ, ⊥ ∨ ϕ ≡ ϕ, ⊤ ∨ ϕ ≡ ⊤,
a ∧ ϕ ≡ a ∧ ϕ[⊤/a], ¬a ∧ ϕ ≡ ¬a ∧ ϕ[⊥/a], a ∨ ϕ ≡ a ∨ ϕ[⊥/a], ¬a ∨ ϕ ≡ ¬a ∨ ϕ[⊤/a],
idempotency, absorption, commutativity, associativity, ...
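The simplification rules can be sketched as a single recursive rewriting pass: constants are propagated upwards, and a literal on one side of a connective is substituted into the other side. The tuple encoding and all names are illustrative assumptions; this is one pass, not a fixpoint computation.

```python
def subst_atom(f, a, val):
    """f[val/a]: replace atom a by val (("top",) or ("bot",)) throughout f."""
    tag = f[0]
    if tag == "atom":
        return val if f[1] == a else f
    if tag in ("top", "bot"):
        return f
    if tag == "not":
        return ("not", subst_atom(f[1], a, val))
    return (tag, subst_atom(f[1], a, val), subst_atom(f[2], a, val))

def simplify(f):
    """One bottom-up pass of the unit and literal simplification rules."""
    tag = f[0]
    if tag in ("top", "bot", "atom"):
        return f
    if tag == "not":
        g = simplify(f[1])
        if g[0] == "top": return ("bot",)
        if g[0] == "bot": return ("top",)
        return ("not", g)
    l, r = simplify(f[1]), simplify(f[2])
    # Literal propagation:
    #   a AND phi = a AND phi[top/a],    not-a AND phi = not-a AND phi[bot/a],
    #   a OR  phi = a OR  phi[bot/a],    not-a OR  phi = not-a OR  phi[top/a]
    if l[0] == "atom":
        r = simplify(subst_atom(r, l[1], ("top",) if tag == "and" else ("bot",)))
    elif l[0] == "not" and l[1][0] == "atom":
        r = simplify(subst_atom(r, l[1][1], ("bot",) if tag == "and" else ("top",)))
    # Unit rules: bot/top absorb or disappear
    if tag == "and":
        if l[0] == "bot" or r[0] == "bot": return ("bot",)
        if l[0] == "top": return r
        if r[0] == "top": return l
    else:  # "or"
        if l[0] == "top" or r[0] == "top": return ("top",)
        if l[0] == "bot": return r
        if r[0] == "bot": return l
    if l == r:  # idempotency
        return l
    return (tag, l, r)
```

Note that absorption (a ∧ (a ∨ b) ≡ a) falls out of the literal-propagation rules combined with the unit rules.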

Restricting formula growth in search trees

Problem: very big formulae obtained by regression.
Cause: disjunctivity in the (NNF) formulae. (Formulae without disjunctions are easily convertible to small formulae l_1 ∧ ... ∧ l_n, where the l_i are literals and n is at most the number of state variables.)
Idea: handle disjunctivity when generating search trees.

Unrestricted regression: search tree example

Unrestricted regression: do not treat disjunctions specially.
Goal γ = a ∧ b, initial state I = {a ↦ 0, b ↦ 0, c ↦ 0}.
[Search tree figure: regression from the root γ = a ∧ b produces nodes such as (¬c ∨ a) ∧ b and (¬c ∨ a) ∧ a, with the applied operators labelling the edges.]

Full splitting: search tree example

Full splitting: always remove all disjunctivity.
Goal γ = a ∧ b, initial state I = {a ↦ 0, b ↦ 0, c ↦ 0}.
(¬c ∨ a) ∧ b in DNF: (¬c ∧ b) ∨ (a ∧ b); split into ¬c ∧ b and a ∧ b.
[Search tree figure: the node (¬c ∨ a) ∧ b is split into the children ¬c ∧ b and a ∧ b, the latter a duplicate of γ.]
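Full splitting amounts to converting the regressed NNF formula to DNF and creating one disjunction-free child node per disjunct. A minimal sketch, again using the hypothetical tuple encoding:

```python
def dnf(f):
    """NNF formula (tuple encoding) -> list of DNF disjuncts, each a list of
    literals whose conjunction describes one disjunction-free child node."""
    tag = f[0]
    if tag == "or":
        return dnf(f[1]) + dnf(f[2])
    if tag == "and":
        # distribute: every combination of a left and a right disjunct
        return [left + right for left in dnf(f[1]) for right in dnf(f[2])]
    return [[f]]  # literal: an atom, a negated atom, or a constant

def full_split(f):
    """Full splitting: one search node per DNF disjunct of f."""
    return dnf(f)
```

On the slide's example this reproduces the two children: (¬c ∨ a) ∧ b splits into ¬c ∧ b and a ∧ b. The distribution step is where the exponential blow-up of the search tree can arise.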

General splitting strategies

Alternatives:
1 Do nothing (unrestricted regression).
2 Always eliminate all disjunctivity (full splitting).
3 Reduce disjunctivity only when a formula becomes too big.
Discussion: With unrestricted regression the formulae may have size exponential in the number of state variables. With full splitting the search tree can be exponentially bigger than without splitting. The third option lies between these two extremes.
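The third strategy can be sketched as a size-threshold rule: a node is kept whole while its formula is small, and split into DNF disjuncts once it exceeds a limit. The tuple encoding, the size measure, and the threshold are all illustrative assumptions.

```python
def size(f):
    """Size of a tuple-encoded formula: number of connectives plus leaves."""
    return 1 + sum(size(g) for g in f[1:] if isinstance(g, tuple))

def dnf(f):
    """NNF formula -> list of DNF disjuncts (lists of literals)."""
    tag = f[0]
    if tag == "or":
        return dnf(f[1]) + dnf(f[2])
    if tag == "and":
        return [left + right for left in dnf(f[1]) for right in dnf(f[2])]
    return [[f]]

def conjoin(lits):
    """Rebuild a conjunction formula from a list of literals."""
    f = lits[0]
    for lit in lits[1:]:
        f = ("and", f, lit)
    return f

def maybe_split(f, limit):
    """Strategy 3: keep the node whole while it is small; once its size
    exceeds `limit`, eliminate the disjunctivity by full splitting."""
    if size(f) <= limit:
        return [f]
    return [conjoin(c) for c in dnf(f)]
```

Choosing `limit` trades the two extremes off against each other: a huge limit degenerates to unrestricted regression, limit 0 to full splitting.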

Summary

(Classical) search is a very important planning approach. Search-based planning algorithms differ along many dimensions, including:
the search direction (forward, backward);
what each search node represents (a state, a set of states, an operator sequence).
Progression search proceeds forwards from the initial state. If we use duplicate detection, each search node corresponds to a unique state. If we do not use duplicate detection, each search node corresponds to a unique operator sequence.

Summary (ctd.)

Regression search proceeds backwards from the goal. Each search node corresponds to a set of states, represented by a formula. Regression is simple for STRIPS operators; the theory for general regression is more complex. When applying regression in practice, additional considerations, such as when and how to perform splitting, come into play.