CSE 473: Artificial Intelligence Spring 2014

CSE 473: Artificial Intelligence Spring 2014. Hanna Hajishirzi. Problem Spaces and Search. Slides from Dan Klein, Stuart Russell, Andrew Moore, Dan Weld, Pieter Abbeel, Luke Zettlemoyer.

Outline: Agents that Plan Ahead; Search Problems; Uninformed Search Methods (part review for some): Depth-First Search, Breadth-First Search, Uniform-Cost Search; Heuristic Search Methods (new for all): Best First / Greedy Search.

Review: Agents. An agent: perceives and acts; selects actions that maximize its utility function; has a goal. Environment: input and output to the agent. [Figure: agent diagram, with sensors receiving percepts from the environment and actuators producing actions.] Search -- the environment is: fully observable, single agent, deterministic, static, discrete.

Reflex Agents. Reflex agents: Choose an action based on the current percept (and maybe memory). Do not consider the future consequences of their actions. Act on how the world IS. Can a reflex agent achieve goals?

Goal-Based Agents. Goal-based agents: Plan ahead. Ask "what if?" Decisions are based on (hypothesized) consequences of actions. Must have a model of how the world evolves in response to actions. Act on how the world WOULD BE.

Search through a Problem Space / State Space. Input: Set of states; Successor function [and costs - default to 1.0]; Start state; Goal state [test]. Output: Path: start state to a state satisfying the goal test. [May require the shortest path.] [Sometimes just need a state passing the test.]
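This formulation can be sketched as a minimal interface. This is an illustrative sketch, not the course's actual project code; the class and method names (`SearchProblem`, `successors`, `GridPathing`) are assumptions for the example.

```python
# Illustrative sketch of the slide's formulation: a search problem is a set
# of states, a successor function with costs (default 1.0), a start state,
# and a goal test. Names are hypothetical, not the course codebase.

class SearchProblem:
    def start_state(self):
        raise NotImplementedError

    def is_goal(self, state):
        raise NotImplementedError

    def successors(self, state):
        """Yield (next_state, action, cost) triples."""
        raise NotImplementedError

# Example instance: pathing on a grid with NSEW moves, each costing 1.0.
class GridPathing(SearchProblem):
    def __init__(self, width, height, start, goal):
        self.width, self.height = width, height
        self.start, self.goal = start, goal

    def start_state(self):
        return self.start

    def is_goal(self, state):
        return state == self.goal

    def successors(self, state):
        x, y = state
        moves = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
        for action, (dx, dy) in moves.items():
            nx, ny = x + dx, y + dy
            if 0 <= nx < self.width and 0 <= ny < self.height:
                yield ((nx, ny), action, 1.0)
```

The point of the interface is that every example that follows (Pac-Man, route planning, N Queens, algebraic simplification) fills in the same four slots.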

Example: Simplified Pac-Man. Input: A state space; a successor function (e.g., moves N or E, each with cost 1.0); a start state; a goal test. Output:

Example: Route Planning: Romania → Bucharest. Input: Set of states; Operators [and costs]; Start state; Goal state (test). Output:

Example: N Queens. [Figure: board with queens placed.] Input: Set of states; Operators [and costs]; Start state; Goal state (test). Output:

Algebraic Simplification Input: Set of states Operators [and costs] Start state Goal state (test) Output:

What is in a State Space? A world state includes every last detail of the environment. A search state includes only the details needed for planning. Problem: Pathing. States: (x,y) locations. Actions: NSEW moves. Successor: update location. Goal: is (x,y) = END? Problem: Eat-all-dots. States: ((x,y), dot booleans). Actions: NSEW moves. Successor: update location and dot booleans. Goal: dots all false?

State Space Sizes? World states: Pacman positions: 10 x 12 = 120 Pacman facing: up, down, left, right Food Count: 30 Ghost positions: 12

State Space Sizes? How many? World states: 120 × (2^30) × (12^2) × 4. States for Pathing: 120. States for Eat-all-dots: 120 × (2^30).
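The counts follow by multiplying the independent components of a world state from the previous slide (the 12^2 factor corresponds to two ghosts, each in one of 12 positions):

```python
# World-state count from the slides: Pacman position (10 x 12 = 120),
# food subsets (2^30 for 30 dots), two ghost positions (12^2), and
# Pacman's facing direction (4).
pacman_positions = 10 * 12                          # 120
world_states = pacman_positions * (2 ** 30) * (12 ** 2) * 4

pathing_states = pacman_positions                   # only location matters
eat_all_dots_states = pacman_positions * (2 ** 30)  # location + dot booleans

print(world_states)         # about 7.4e13
print(pathing_states)       # 120
print(eat_all_dots_states)  # about 1.3e11
```

The takeaway is the gap between the full world state and a well-chosen search state: pathing needs only 120 states, while the full world state is roughly eleven orders of magnitude larger.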

Quiz: Safe Passage. Problem: eat all dots while keeping the ghosts perma-scared. What does the state space have to specify? (agent position, dot booleans, power-pellet booleans, remaining scared time)

State Space Graphs. State space graph: Each node is a state. The successor function is represented by arcs. Edges may be labeled with costs. We can rarely build this graph in memory (so we don't).

Search Trees. [Figure: tree fragment with edges labeled N, 1.0 and E, 1.0.] A search tree: Start state at the root node. Children correspond to successors. Nodes contain states and correspond to PLANS to those states. Edges are labeled with actions and costs. For most problems, we can never actually build the whole tree.

Example: Tree Search. [Figure: state graph with states S, a, b, c, d, e, f, h, p, q, r and goal G -- a ridiculously tiny search graph for a tiny search problem.] What is the search tree?

State Graphs vs. Search Trees. Each NODE in the search tree is an entire PATH in the problem graph. We construct both on demand, and we construct as little as possible. [Figure: the state graph from the previous slide alongside its search tree rooted at S, showing the same states recurring along different paths.]

States vs. Nodes. Nodes in state space graphs are problem states: they represent an abstracted state of the world; they have successors, can be goal / non-goal, and can have multiple predecessors. Nodes in search trees are plans: they represent a plan (sequence of actions) which results in the node's state; they have a problem state and one parent, a path length, a depth, and a cost. The same problem state may be achieved by multiple search tree nodes. [Figure: search nodes at depths 5 and 6, each mapping to a problem state.]
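The bookkeeping described above (state, parent, action, depth, cost) can be sketched as a small class; the names here are illustrative, not the course's project code.

```python
# A search-tree node is a plan: it records a problem state plus the parent
# pointer, action, depth, and path cost needed to reconstruct the action
# sequence that reaches that state. Illustrative sketch only.
class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0.0):
        self.state = state
        self.parent = parent
        self.action = action
        self.depth = 0 if parent is None else parent.depth + 1
        self.cost = step_cost if parent is None else parent.cost + step_cost

    def plan(self):
        """Recover the action sequence from the root to this node."""
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

# Two different tree nodes can share the same problem state "a",
# reached via different paths from the root S:
root = Node("S")
n1 = Node("a", parent=root, action="E", step_cost=1.0)
n2 = Node("a", parent=Node("b", parent=root, action="N", step_cost=1.0),
          action="E", step_cost=1.0)
# n1.plan() == ["E"]; n2.plan() == ["N", "E"] -- same state, different plans.
```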

Quiz: State Graphs vs. Search Trees. Consider this 4-state graph: [states S, a, b and goal G]. How big is its search tree (from S)? Important: lots of repeated structure in the search tree!

Building Search Trees Search: Expand out possible plans Maintain a fringe of unexpanded plans Try to expand as few tree nodes as possible

General Tree Search Important ideas: Fringe Expansion Exploration strategy Detailed pseudocode is in the book! Main question: which fringe nodes to explore?
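The loop sketched on this slide (maintain a fringe of unexpanded plans, pick one, expand it) fits in a few lines; the exploration strategy is simply the order in which the fringe hands plans back. This is a generic sketch under that reading, not the book's exact pseudocode, and the example graph is hypothetical.

```python
from collections import deque

def tree_search(start, is_goal, successors, strategy="bfs"):
    """Generic tree search: the fringe holds unexpanded plans (paths).

    strategy="bfs" pops the oldest plan (breadth-first);
    strategy="dfs" pops the newest plan (depth-first).
    """
    fringe = deque([[start]])  # each fringe entry is a path from the start
    while fringe:
        path = fringe.popleft() if strategy == "bfs" else fringe.pop()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):  # expansion: extend the plan
            fringe.append(path + [nxt])
    return None  # no plan found (note: may not terminate on cyclic graphs)

# Tiny hypothetical graph: S -> {a, b}, a -> {G}, b -> {G}.
graph = {"S": ["a", "b"], "a": ["G"], "b": ["G"], "G": []}
print(tree_search("S", lambda s: s == "G", graph.__getitem__))
# -> ['S', 'a', 'G']
```

Swapping which end of the fringe is popped changes the exploration strategy without touching the rest of the loop, which is exactly why "which fringe nodes to explore?" is the main question.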