Deterministic Planning and State-space Search. Erez Karpas


1 Planning and Search. IE&M, Technion.

2 Problem-solving agents. Restricted form of general agent:
def Simple-Problem-Solving-Agent(problem):
    state, some description of the current world state
    seq, an action sequence, initially empty
    state ← Update-State(state, percept)
    if seq is empty then seq ← SearchForSolution(problem)
    action ← SelectAction(seq, state)
    seq ← Remainder(seq, action)
    return action
Note: offline problem solving; solution executed "eyes closed".
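A minimal Python sketch of this agent loop, under the assumption of a planner function search_for_solution(problem) that returns a list of actions (the function and variable names here are illustrative, not taken from the slides):

    def simple_problem_solving_agent(problem, search_for_solution):
        # Offline problem solving: plan once, then execute the plan "eyes closed".
        seq = []  # action sequence, initially empty
        def agent(percept):
            nonlocal seq
            # (updating the state from the percept is omitted: the world is assumed static)
            if not seq:
                seq = search_for_solution(problem) or []
            if not seq:
                return None  # no plan found
            action, seq = seq[0], seq[1:]
            return action
        return agent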

3 Example: A trip in Romania. Task: On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest. [Figure: road map of Romania with driving distances between cities (Arad, Zerind, Oradea, Sibiu, Timisoara, Lugoj, Mehadia, Dobreta, Craiova, Fagaras, Rimnicu Vilcea, Pitesti, Bucharest, Giurgiu, Neamt, Iasi, Vaslui, Urziceni, Hirsova, Eforie).]

4 Example: A trip in Romania. Task: On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest. Formulate goal: be in Bucharest. Formulate problem: states: various cities; actions: drive between cities. Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest. You probably know how to solve this one, but...
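To make the formulation concrete, the problem can be written down as plain data. The fragment below encodes a part of the road map (the distances follow the standard textbook version of this example and are given here only for illustration):

    romania_roads = {  # partial map: city -> {neighbor: driving distance}
        "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
        "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
        "Fagaras": {"Sibiu": 99, "Bucharest": 211},
        "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
        "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
    }

    initial_state = "Arad"

    def is_goal(city):
        return city == "Bucharest"

    def actions(city):
        return sorted(romania_roads.get(city, {}))  # drivable neighbors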

6 A sample of planning problems: Route selection (from Arad to Bucharest); solving the 15-puzzle (or Rubik's cube, or...); selecting and ordering movements of an elevator or a crane; production line control; autonomous robots; crisis management... What is in common?

7 Planning Problems. What is in common? All these deal with action selection or control: some notion of a problem state; (often) a specification of an initial state and/or goal state; legal moves or actions that allow transitions from states to other states. [Figure: road map of Romania.]

8 Planning Problems. For now focus on: Plans (aka solutions) are sequences of moves that transform the initial state into the goal state. Intuitively, not all solutions are equally desirable. What is our task? 1 Find out whether there is a solution; 2 Find any solution; 3 Find an optimal solution; 4 Find a near-optimal solution; 5 Given a fixed amount of time, find the best solution possible; 6 Find a solution that satisfies property ℵ (what is ℵ? you choose!).

9 Planning Problems. What is our task? 1 Find out whether there is a solution; 2 Find any solution; 3 Find an optimal solution; 4 Find a near-optimal solution; 5 Given a fixed amount of time, find the best solution possible; 6 Find a solution that satisfies property ℵ (what is ℵ? you choose!). While all these tasks sound related, they are very different. The best suited techniques are almost disjoint.

10 Planning vs. Scheduling. Planning can be seen as extending scheduling. Scheduling: deciding when to perform a given set of actions, with time constraints, resource constraints, global constraints (e.g., regulatory issues), and objective functions. Planning: deciding what actions to perform (and when) to achieve a given objective, with the same issues. The difference comes into play in the solution techniques, and actually even in the worst-case time/space complexity.

11 Three Approaches to Problem Solving: 1 programming: specify control by hand; 2 model-oriented solving: specify problem by hand, derive control automatically; 3 learning: learn control from experience. Planning is a form of model-oriented problem solving.

12 State Model (for deterministic planning): a finite state space S; an initial state s₀ ∈ S; a set S_G ⊆ S of goal states; applicable actions A(s) ⊆ A for s ∈ S; a transition function s' = f(a, s) for a ∈ A(s); a cost function c : A → [0, ∞). A solution is a sequence of applicable actions that maps s₀ into S_G. An optimal solution minimizes the total cost c.
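The components of the state model map directly onto code. Below is a minimal Python sketch with a tiny made-up two-state instance, just to show the shape of the interface; the class and field names are an assumption, not something given on the slides:

    from dataclasses import dataclass
    from typing import Callable, Set

    @dataclass
    class StateModel:
        states: Set[str]                       # S
        initial_state: str                     # s0, an element of S
        goal_states: Set[str]                  # S_G, a subset of S
        applicable: Callable[[str], Set[str]]  # A(s)
        transition: Callable[[str, str], str]  # f(a, s)
        cost: Callable[[str], float]           # c : A -> [0, inf)

    # Made-up instance: the single action "go" moves from "start" to "done".
    toy = StateModel(
        states={"start", "done"},
        initial_state="start",
        goal_states={"done"},
        applicable=lambda s: {"go"} if s == "start" else set(),
        transition=lambda a, s: "done" if (a, s) == ("go", "start") else s,
        cost=lambda a: 1.0,
    )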

14 Small example. [Figure: a small transition graph with states A-F; one state is marked as the initial state and some are marked as goal states.]

15 Why is planning difficult? Solutions to planning problems are paths from an initial state to a goal state in the transition graph. Dijkstra's algorithm solves this problem in O(|V| log(|V|) + |E|). Can we go home??

16 Why is planning difficult? Solutions to planning problems are paths from an initial state to a goal state in the transition graph. Dijkstra's algorithm solves this problem in O(|V| log(|V|) + |E|). Can we go home?? Not exactly: the |V| of our interest is 10^10, 10^20, ... But do we need such values of |V|?!

17 Why is planning difficult? Generalize the earlier example: five locations, three robot carts, 100 containers, three piles. |V| is astronomical: the number of atoms in the universe is only about 10^80, and the state space in our example is many orders of magnitude larger (oops...). Good news: Very much not hopeless!

19 Different classes of planning problems. Properties: dynamics: deterministic, nondeterministic, or probabilistic; observability: full, partial, or none; horizon: finite or infinite; ... Side comment: It is not that deterministic planning problems are easy. It is not even clear that they are too far from modeling the real world well!

20 Properties of the world: dynamics. Deterministic dynamics: Action + current state uniquely determine the successor state. Nondeterministic dynamics: For each action and current state there may be several possible successor states. Probabilistic dynamics: For each action and current state there is a probability distribution over possible successor states.

21 Deterministic dynamics example. Moving objects with a robotic hand: move the green block onto the blue block.

22 Nondeterministic dynamics example. Moving objects with an unreliable robotic hand: move the green block onto the blue block.

23 Probabilistic dynamics example. Moving objects with an unreliable robotic hand: move the green block onto the blue block. [Figure: two possible outcomes, with probabilities p = 0.9 and p = 0.1.]

24 Properties of the world: observability. Full observability: Observations/sensing determine the current world state uniquely. Partial observability: Observations determine the current world state only partially: we only know that the current state is one of several possible ones. No observability: There are no observations to narrow down possible current states. However, one can use knowledge of the action dynamics to deduce which states we might be in. Consequence: If observability is not full, we must represent the knowledge an agent has.

25 What difference does observability make? [Figure: Camera A, Camera B, and a goal.]

26 Different objectives. 1 Reach a goal state. Example: Earn 500 NIS. 2 Stay in goal states indefinitely (infinite horizon). Example: Never allow the bank account balance to be negative. 3 Maximize the probability of reaching a goal state. Example: To be able to finance buying a house by 2018, study hard and save money. 4 Collect the maximal expected rewards/minimal expected costs (infinite horizon). Example: Maximize your future income.

27 Example: vacuum world state space graph. states: 0/1 dirt and robot location (ignore dirt amounts etc.); actions: Left, Right, Suck, NoOp; goal test: no dirt; path cost: 1 per action (0 for NoOp). [Figure: the eight vacuum-world states connected by Left (L), Right (R), and Suck (S) transitions.]
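The eight vacuum-world states and their transitions fit in a few lines of Python. A state is encoded here as a tuple (robot location, dirt on the left, dirt on the right); this encoding is one natural choice, not prescribed by the slides:

    def vacuum_successors(state):
        # state = (loc, dirt_left, dirt_right); loc is "L" or "R", dirt flags are 0/1
        loc, dl, dr = state
        return {
            "Left": ("L", dl, dr),
            "Right": ("R", dl, dr),
            "Suck": (loc, 0, dr) if loc == "L" else (loc, dl, 0),
            "NoOp": state,
        }

    def vacuum_goal(state):
        _, dl, dr = state
        return dl == 0 and dr == 0

    all_states = [(loc, dl, dr) for loc in "LR" for dl in (0, 1) for dr in (0, 1)]
    assert len(all_states) == 8  # two robot locations times four dirt configurations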

28 Vacuum world: different classes of problems. Single-state, start in #5. Solution?

29 Vacuum world: different classes of problems. Single-state, start in #5. Solution? [Right, Suck]. Conformant, start in {1, 2, 3, 4, 5, 6, 7, 8}, e.g., Right goes to {2, 4, 6, 8}. Solution?

30 Vacuum world: different classes of problems. Single-state, start in #5. Solution? [Right, Suck]. Conformant, start in {1, 2, 3, 4, 5, 6, 7, 8}, e.g., Right goes to {2, 4, 6, 8}. Solution? [Right, Suck, Left, Suck]. Conditional, start in {5, 7}. Murphy's Law: Suck can dirty a clean carpet. Local sensing: dirt, location only. Solution? [Right, if dirt then Suck].

31 Deterministic transition systems. A transition system is deterministic if there is only one initial state and all actions are deterministic. Hence all future states of the world are completely predictable. Definition (deterministic transition system): A deterministic transition system is ⟨S, s₀, A, S_G, c⟩ where: S is a finite state space; s₀ ∈ S is an initial state; S_G ⊆ S is a set of goal states; A(s) ⊆ A are the applicable actions for s ∈ S; s' = f(a, s) is the transition function for a ∈ A(s); and c : A → [0, ∞) is a cost function.

32 Blocks world. The rules of the game: Location on the table does not matter. Location on a block does not matter.

33 Blocks world. The rules of the game: At most one block may be below a block. At most one block may be on top of a block.

34 Blocks world. [Figure: the transition graph for three blocks.]

35 Blocks world. Properties: [Table: number of states as a function of the number of blocks.] 1 Finding a solution takes polynomial time in the number of blocks (move everything onto the table and then construct the goal configuration). 2 Finding a shortest solution is NP-hard...
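Under these rules a state is just a collection of towers of distinct blocks, so the number of states can be counted directly. The sketch below uses that counting convention (an assumption consistent with the rules above; the slide's own table of numbers is not reproduced here):

    from math import comb, factorial

    def blocks_world_states(n):
        # For each number of towers k: arrange the n blocks in a line (n! ways),
        # cut the line into k non-empty towers (C(n-1, k-1) ways), and ignore
        # the order of the towers themselves (divide by k!).
        return sum(factorial(n) * comb(n - 1, k - 1) // factorial(k)
                   for k in range(1, n + 1))

    # blocks_world_states(1) == 1, blocks_world_states(2) == 3,
    # blocks_world_states(3) == 13: the count grows very quickly with n.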

36 Planning problems: Route selection (from Arad to Bucharest); solving the 15-puzzle (or Rubik's cube, or...); selecting and ordering movements of an elevator or a crane; production line control; autonomous robots; crisis management...

37 Planning: one of the big success stories of AI. Must carefully distinguish two different problems: satisficing planning: any solution is OK (although shorter solutions are typically preferred); optimal planning: plans must have the shortest possible length. Both are often solved by search, but: the details are very different; there is almost no overlap between good techniques for satisficing planning and good techniques for optimal planning; many problems that are trivial for satisficing planners are impossibly hard for optimal planners.

38 Planning by forward search: progression. Progression: computing the successor state f(s, a) of a state s with respect to an action a. Progression planners find solutions by forward search: start from the initial state; iteratively pick a previously generated state and progress it through an action, generating a new state; a solution is found when a goal state is generated. [Figure: progression search tree rooted at Arad, expanding to Sibiu, Timisoara and Zerind, and then to their successors (Arad, Fagaras, Oradea, Rimnicu Vilcea; Arad, Lugoj; Arad, Oradea).]

41 Implementation: states vs. nodes. In search, one distinguishes: states s, the states (vertices) of the transition system; nodes σ, states plus information on where/when/how they are encountered during search. What is in a node? Different search algorithms store different information in a node σ, but typical information includes: state(σ): the associated state; parent(σ): a pointer to the node from which σ is reached; action(σ): an action leading from state(parent(σ)) to state(σ); g(σ): the cost of σ (length of the path from the root node). For the root node, parent(σ) and action(σ) are undefined.

42 Implementation: states vs. nodes. In search, one distinguishes: states s, the states (vertices) of the transition system; nodes σ, states plus information on where/when/how they are encountered during search. What is in a node? [Figure: search tree for the Romania example, rooted at Arad.]

43 Required ingredients for search. A general search algorithm can be applied to any transition system for which we can define the following three operations: init(): generate the initial state; is-goal(s): test if a given state is a goal state; succ(s): generate the set of successor states of state s, along with the actions through which they are reached (represented as pairs ⟨o, s'⟩ of actions and states). Together, these three functions form a search space (a very similar notion to a transition system).
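In Python, such a search space can be bundled into a small class; the version below wraps an explicit successor table (the class name and attributes are illustrative):

    class SearchSpace:
        # A search space given by init(), is_goal(s) and succ(s).
        def __init__(self, initial, goals, successors):
            self._initial = initial        # the single initial state
            self._goals = set(goals)       # the set of goal states
            self._successors = successors  # dict: state -> list of (action, state) pairs

        def init(self):
            return self._initial

        def is_goal(self, s):
            return s in self._goals

        def succ(self, s):
            return self._successors.get(s, [])

    # Example (hypothetical):
    # space = SearchSpace("A", {"D"}, {"A": [("go-B", "B")], "B": [("go-D", "D")]})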

44 Classification of search algorithms. Uninformed vs. heuristic search: uninformed search algorithms only use the basic ingredients for general search algorithms; heuristic search algorithms additionally use heuristic functions which estimate how close a node is to the goal. Systematic vs. local search: systematic algorithms consider a large number of search nodes simultaneously; local search algorithms work with one (or a few) candidate solutions (search nodes) at a time. This is not a black-and-white distinction; there are crossbreeds (e.g., enforced hill-climbing).

45 Common procedures for search algorithms. Before we describe the different search algorithms, we introduce three procedures used by all of them: make-root-node: Create a node without a parent. make-node: Create a node for a state generated as the successor of another state. extract-solution: Extract a solution from a node representing a goal state.

46 Procedure make-root-node. make-root-node: Create a node without a parent.
Procedure make-root-node
def make-root-node(s):
    σ := new node
    state(σ) := s
    parent(σ) := undefined
    action(σ) := undefined
    g(σ) := 0
    return σ

47 Procedure make-node. make-node: Create a node for a state generated as the successor of another state.
Procedure make-node
def make-node(σ, o, s):
    σ' := new node
    state(σ') := s
    parent(σ') := σ
    action(σ') := o
    g(σ') := g(σ) + 1
    return σ'

48 Procedure extract-solution. extract-solution: Extract a solution from a node representing a goal state.
Procedure extract-solution
def extract-solution(σ):
    solution := new list
    while parent(σ) is defined:
        solution.push-front(action(σ))
        σ := parent(σ)
    return solution
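Taken together, the three procedures translate almost line for line into Python. One possible rendering, where the Node class is an implementation choice rather than something fixed by the slides:

    class Node:
        def __init__(self, state, parent=None, action=None, g=0):
            self.state, self.parent, self.action, self.g = state, parent, action, g

    def make_root_node(s):
        return Node(s)  # parent and action stay undefined (None), g = 0

    def make_node(sigma, o, s):
        return Node(s, parent=sigma, action=o, g=sigma.g + 1)

    def extract_solution(sigma):
        solution = []
        while sigma.parent is not None:  # walk back from the goal node to the root
            solution.append(sigma.action)
            sigma = sigma.parent
        solution.reverse()               # report the actions in root-to-goal order
        return solution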

49 Search algorithms. Popular uninformed systematic search algorithms: breadth-first search, depth-first search, iterated depth-first search. Popular uninformed local search algorithms: random walk.

50 Breadth-first search
Breadth-first search
queue := new fifo-queue
queue.push-back(make-root-node(init()))
while not queue.empty():
    σ = queue.pop-front()
    if is-goal(state(σ)):
        return extract-solution(σ)
    for each ⟨o, s⟩ ∈ succ(state(σ)):
        σ' := make-node(σ, o, s)
        queue.push-back(σ')
return unsolvable
[Figure: example search tree with root A and nodes B-G.]
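A directly runnable Python version of this pseudocode, assuming the helpers sketched earlier (make_root_node, make_node, extract_solution) and a search space object offering init/is_goal/succ:

    from collections import deque

    def breadth_first_search(space):
        queue = deque([make_root_node(space.init())])  # FIFO queue of nodes
        while queue:
            sigma = queue.popleft()
            if space.is_goal(sigma.state):
                return extract_solution(sigma)
            for o, s in space.succ(sigma.state):
                queue.append(make_node(sigma, o, s))
        return None  # unsolvable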

54 Is it any good? Dimensions for evaluation: completeness: does it always find a solution if one exists?; time complexity: number of nodes generated/expanded; space complexity: number of nodes in memory; optimality: does it always find a least-cost solution? Time/space complexity is measured in terms of: b, the maximum branching factor of the tree; d, the depth of the least-cost solution; m, the maximum depth of the state space (may be ∞). Quiz:

55 Properties of breadth-first search. Complete: Yes (if b is finite). Time: 1 + b + b^2 + b^3 + ... + b^d + b(b^d − 1) = O(b^(d+1)), i.e., exp(d). Space: O(b^(d+1)) (why?). Optimal: Yes (if cost = 1 per step); can be generalized (how?). Space is the big problem; one can easily generate nodes at 100 MB/sec, so 24 hrs = 8640 GB.

56 Breadth-first search
Breadth-first search
queue := new fifo-queue
queue.push-back(make-root-node(init()))
while not queue.empty():
    σ = queue.pop-front()
    if is-goal(state(σ)):
        return extract-solution(σ)
    for each ⟨o, s⟩ ∈ succ(state(σ)):
        σ' := make-node(σ, o, s)
        queue.push-back(σ')
return unsolvable
Possible improvement: duplicate detection (see next slide). Another possible improvement: test if σ' is a goal node; if so, terminate immediately. (We don't do this because it obscures the similarity to some of the later algorithms.)

57 Breadth-first search with duplicate detection
Breadth-first search with duplicate detection
queue := new fifo-queue
queue.push-back(make-root-node(init()))
closed := ∅
while not queue.empty():
    σ = queue.pop-front()
    if state(σ) ∉ closed:
        closed := closed ∪ {state(σ)}
        if is-goal(state(σ)):
            return extract-solution(σ)
        for each ⟨o, s⟩ ∈ succ(state(σ)):
            σ' := make-node(σ, o, s)
            queue.push-back(σ')
return unsolvable
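In Python, the duplicate-detection variant only adds a closed set to the version shown before (same helper functions assumed):

    from collections import deque

    def breadth_first_search_dd(space):
        queue = deque([make_root_node(space.init())])
        closed = set()
        while queue:
            sigma = queue.popleft()
            if sigma.state not in closed:  # expand each state at most once
                closed.add(sigma.state)
                if space.is_goal(sigma.state):
                    return extract_solution(sigma)
                for o, s in space.succ(sigma.state):
                    queue.append(make_node(sigma, o, s))
        return None  # unsolvable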

58 Duplicate detection: Worth the effort? Worst-case complexity: Time: 1 + b + b^2 + b^3 + ... + b^d + b(b^d − 1) = O(b^(d+1)). Same! (why?) Space: O(b^(d+1)). Same! (why?)

59 Duplicate detection: Worth the effort? Worst-case complexity: Time: 1 + b + b^2 + b^3 + ... + b^d + b(b^d − 1) = O(b^(d+1)). Same! (why?) Space: O(b^(d+1)). Same! (why?) Empirical reality: Failure to detect repeated states can turn a linear problem into an exponential one! [Figure: a path-shaped state space A-B-C-D and the exponentially larger search tree explored without duplicate detection.]

60 Depth-first search
Depth-first search
queue := new lifo-queue
queue.push-back(make-root-node(init()))
while not queue.empty():
    σ = queue.pop-front()
    if is-goal(state(σ)):
        return extract-solution(σ)
    for each ⟨o, s⟩ ∈ succ(state(σ)):
        σ' := make-node(σ, o, s)
        queue.push-front(σ')
return unsolvable
[Figure: example search tree with root A and nodes B-O.]
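The same skeleton with a LIFO stack gives depth-first search in Python (helpers as before; siblings are then expanded in reverse order of generation, which does not affect the properties discussed below):

    def depth_first_search(space):
        stack = [make_root_node(space.init())]  # LIFO: most recently generated first
        while stack:
            sigma = stack.pop()
            if space.is_goal(sigma.state):
                return extract_solution(sigma)
            for o, s in space.succ(sigma.state):
                stack.append(make_node(sigma, o, s))
        return None  # unsolvable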

72 Is it any good? Dimensions for evaluation: completeness: does it always find a solution if one exists?; time complexity: number of nodes generated/expanded; space complexity: number of nodes in memory; optimality: does it always find a least-cost solution? Time/space complexity is measured in terms of: b, the maximum branching factor of the tree; d, the depth of the least-cost solution; m, the maximum depth of the state space (may be ∞). Quiz:

73 Properties of depth-first search. Complete: No: fails in spaces with b = ∞ or m = ∞, and in spaces with loops. Possible improvement: duplicate detection along the path; complete in finite spaces. Time: O(b^m): terrible if m is much larger than d, but if solutions are dense, may be much faster than BFS. Space: O(bm), linear space! Optimal: No (right?)

74 Random walk
Random walk
σ := make-root-node(init())
forever:
    if is-goal(state(σ)):
        return extract-solution(σ)
    Choose a random element ⟨o, s⟩ from succ(state(σ)).
    σ := make-node(σ, o, s)
The algorithm usually does not find any solutions, unless almost every sequence of actions is a plan. Often, it runs indefinitely without making progress. It can also fail by reaching a dead end, a state with no successors. This is a weakness of many local search approaches.
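A Python version of the random walk; a step bound is added so that the sketch always terminates, unlike the idealized pseudocode above:

    import random

    def random_walk(space, max_steps=10000):
        sigma = make_root_node(space.init())
        for _ in range(max_steps):
            if space.is_goal(sigma.state):
                return extract_solution(sigma)
            successors = space.succ(sigma.state)
            if not successors:
                return None  # dead end: a state with no successors
            o, s = random.choice(successors)
            sigma = make_node(sigma, o, s)
        return None  # gave up after max_steps without reaching a goal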
