CS 188: Artificial Intelligence, Spring 2009. Lecture 7: Expectimax Search

CS 188: Artificial Intelligence, Spring 2009
Lecture 7: Expectimax Search
2/10/2009, John DeNero, UC Berkeley
Slides adapted from Dan Klein, Stuart Russell, or Andrew Moore

Announcements
Written Assignment 1: Due at the end of lecture. If you haven't done it but still want some points, come talk to me after class.
Project 1: Most of you did very well. We promise not to steal your slip days. Come to office hours if you didn't finish and want help.
Project 2: Due a week from tomorrow (Wednesday). Want a partner? Come to the front after lecture.

Today
Mini-contest 1 results
Pruning game trees
Chance in game trees

Mini-Contest Winners
Problem: eat all the food in bigsearch. Challenge: finding a provably optimal path is very difficult. Winning solutions (baseline is 350):
5th: Greedy hill-climbing, Jeremy Cowles: 314
4th: Local choices, Jon Hirschberg and Nam Do: 292
3rd: Local choices, Richard Guo and Shendy Kurnia: 290
2nd: Local choices, Tim Swift: 286
1st: A* with an inadmissible heuristic, Nikita Mikhaylin: 284

GamesCrafters
http://gamescrafters.berkeley.edu/

Adversarial Games
Deterministic, zero-sum games: tic-tac-toe, chess, checkers. One player maximizes the result; the other minimizes it.
Minimax search: a state-space search tree in which players alternate turns. Each node has a minimax value: the best achievable utility against a rational adversary. Minimax values are computed recursively; terminal values are part of the game definition.
[Figure: a small game tree with a max layer over min layers and terminal values such as 8, 2, 6, and 1; the root's minimax value is 2.]
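The recursive minimax computation can be sketched directly in Python. This is an illustrative sketch, not the course's project code: a hypothetical representation in which a game tree is a nested ("max"/"min", children) pair and a terminal node is a plain number.

```python
def minimax(node):
    """Return the minimax value of a game-tree node.

    A node is either a terminal utility (a number) or a pair
    (kind, children) with kind in {"max", "min"}.
    """
    if isinstance(node, (int, float)):
        return node  # terminal value: part of the game definition
    kind, children = node
    values = [minimax(child) for child in children]
    return max(values) if kind == "max" else min(values)

# A tiny tree: MAX chooses between two MIN nodes.
tree = ("max", [("min", [8, 2]), ("min", [6, 1])])
print(minimax(tree))  # 2: the best achievable utility against a rational adversary
```

MIN reduces the left branch to 2 and the right branch to 1, so MAX picks the left branch for a root value of 2.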

Computing Minimax Values
Two recursive functions: max-value maxes the values of successors; min-value mins the values of successors.

def value(state):
    if the state is a terminal state: return the state's utility
    if the next agent is MAX: return max-value(state)
    if the next agent is MIN: return min-value(state)

def max-value(state):
    initialize max = -infinity
    for each successor of state:
        compute value(successor)
        update max accordingly
    return max

Pruning in Minimax Search
[Figure: a worked pruning example; bracketed intervals such as [3, +inf] and [3, 14] show each node's possible minimax values narrowing as the leaves (3, 12, 8, 2, 14, ...) are examined.]

Alpha-Beta Pruning
General configuration: a (alpha) is the best value that MAX can get at any choice point along the current path. If n becomes worse than a, MAX will avoid it, so we can stop considering n's other children. Define b (beta) similarly for MIN.

Alpha-Beta Pseudocode
[Figure: pseudocode slide showing alternating MAX and MIN layers, with a node n whose running value v is compared against the bounds a and b.]

Alpha-Beta Pruning Example
[Figure: the example tree annotated with the a and b values at each node as the search proceeds.]

Alpha-Beta Pruning Properties
a is MAX's best alternative in the branch; b is MIN's best alternative in the branch.
This pruning has no effect on the final result at the root, but the values of intermediate nodes might be wrong!
Good move ordering improves the effectiveness of pruning. With perfect ordering, time complexity drops to O(b^(m/2)), which doubles the solvable depth. A full search of, e.g., chess is still hopeless!
This is a simple example of metareasoning, and the only one you need to know in detail.
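The general configuration above turns into code by threading the two bounds through the minimax recursion. A sketch, using the same hypothetical explicit-tree encoding (("max"/"min", children) pairs with numeric leaves), not the slide's exact pseudocode:

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf):
    """Minimax value with alpha-beta pruning.

    alpha: best value MAX can already guarantee on the current path.
    beta:  best value MIN can already guarantee on the current path.
    """
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    if kind == "max":
        v = -math.inf
        for child in children:
            v = max(v, alphabeta(child, alpha, beta))
            if v >= beta:        # MIN above would never let us reach here
                return v
            alpha = max(alpha, v)
        return v
    else:
        v = math.inf
        for child in children:
            v = min(v, alphabeta(child, alpha, beta))
            if v <= alpha:       # MAX above already has a better option
                return v
            beta = min(beta, v)
        return v

# A classic three-branch example: the middle MIN node is pruned after
# its first leaf (2), since MAX already has 3 from the left branch.
tree = ("max", [("min", [3, 12, 8]), ("min", [2, 4, 6]), ("min", [14, 5, 2])])
print(alphabeta(tree))  # 3
```

Note that the pruned middle branch returns 2 rather than its true minimax value, illustrating the property that intermediate-node values can be wrong while the root value is exact.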

Expectimax Search Trees
What if we don't know what the result of an action will be? E.g., in solitaire the next card is unknown; in Monopoly the dice are random; in Pacman the ghosts act randomly.
We can do expectimax search: max nodes behave as in minimax search, while chance nodes are like min nodes except that the outcome is uncertain, so we calculate expected utilities: a chance node takes the average (expectation) of the values of its children.
Later, we'll learn how to formalize the underlying problem as a Markov Decision Process.
[Figure: an expectimax tree with a max root over chance nodes, with values such as 10, 4, and 7.]

Maximum Expected Utility
Why should we average utilities? Why not minimax?
Principle of maximum expected utility: an agent should choose the action which maximizes its expected utility, given its knowledge.
This is a general principle for decision making, often taken as the definition of rationality. We'll see this idea over and over in this course! Let's decompress this definition.

Reminder: Probabilities
A random variable represents an event whose outcome is unknown. A probability distribution is an assignment of weights to outcomes.
Example: traffic on the freeway.
Random variable: T = how much traffic there is.
Outcomes: T in {none, light, heavy}.
Distribution: P(T=none) = 0.25, P(T=light) = 0.5, P(T=heavy) = 0.25.
Common abbreviation: P(light) = 0.5.
Some laws of probability (more later): probabilities are always non-negative, and probabilities over all possible outcomes sum to one.
As we get more evidence, probabilities may change: P(T=heavy) = 0.25, but P(T=heavy | Hour=8am) = 0.60. We'll talk about methods for reasoning about and updating probabilities later.

What are Probabilities?
Objectivist / frequentist answer: averages over repeated experiments, e.g. empirically estimating P(rain) from historical observation. An assertion about how future experiments will go (in the limit). New evidence changes the reference class. Makes one think of inherently random events, like rolling dice.
Subjectivist / Bayesian answer: degrees of belief about unobserved variables, e.g. an agent's belief that it's raining, given the temperature, or Pacman's belief that the ghost will turn left, given the state. Probabilities are often learned from past experiences (more later), and new evidence updates beliefs (more later).

Uncertainty Everywhere
Not just for games of chance! I'm sick: will I sneeze this minute? Email contains "FREE!": is it spam? Tooth hurts: do I have a cavity? Is 60 minutes enough to get to the airport? The robot rotated its wheel three times: how far did it advance? Safe to cross the street? (Look both ways!)
Sources of uncertainty in random variables: an inherently random process (dice, etc.), insufficient or weak evidence, ignorance of underlying processes, unmodeled variables. The world's just noisy: it doesn't behave according to plan!
Compare to fuzzy logic, which has degrees of truth rather than degrees of belief.

Reminder: Expectations
We can define a function f(X) of a random variable X. The expected value of a function is its average value, weighted by the probability distribution over inputs.
Example: how long to get to the airport?
Length of driving time as a function of traffic: L(none) = 20, L(light) = 30, L(heavy) = 60.
What is my expected driving time? Notation: E[L(T)].
Remember, P(T) = {none: 0.25, light: 0.5, heavy: 0.25}.
E[L(T)] = L(none) * P(none) + L(light) * P(light) + L(heavy) * P(heavy)
E[L(T)] = (20 * 0.25) + (30 * 0.5) + (60 * 0.25) = 35
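The airport expectation above is a one-line computation once the distribution and the time function are written down as dictionaries (values taken from the example; the names P and L are just for illustration):

```python
# Traffic distribution and driving time per outcome, from the example.
P = {"none": 0.25, "light": 0.5, "heavy": 0.25}
L = {"none": 20, "light": 30, "heavy": 60}

# E[L(T)] = sum over outcomes t of L(t) * P(T = t)
expected_time = sum(L[t] * P[t] for t in P)
print(expected_time)  # 35.0
```

Each term matches the hand calculation: 20 * 0.25 + 30 * 0.5 + 60 * 0.25 = 5 + 15 + 15 = 35.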

Expectimax Search
In expectimax search, we have a probabilistic model of how the opponent (or environment) will behave in any state. The model could be a simple uniform distribution (roll a die), or it could be sophisticated and require a great deal of computation. We have a node for every outcome out of our control: opponent or environment. The model might even say that adversarial actions are likely!
For now, assume that for any state we have a distribution assigning probabilities to opponent actions / environment outcomes. Having a probabilistic belief about an agent's action does not mean that agent is flipping any coins!

Expectimax Pseudocode

def value(s):
    if s is a max node: return maxvalue(s)
    if s is an exp node: return expvalue(s)
    if s is a terminal node: return evaluation(s)

def maxvalue(s):
    values = [value(s') for s' in successors(s)]
    return max(values)

def expvalue(s):
    values = [value(s') for s' in successors(s)]
    weights = [probability(s, s') for s' in successors(s)]
    return expectation(values, weights)

Expectimax for Pacman
Notice that we've gotten away from thinking that the ghosts are trying to minimize Pacman's score. Instead, they are now part of the environment, and Pacman has a belief (distribution) over how they will act.
Questions: Is minimax a special case of expectimax? What happens if we think the ghosts move randomly, but they really do try to minimize Pacman's score?
Results from playing games (Pacman used depth-4 search with an evaluation function that avoids trouble; the ghost used depth-2 search with an evaluation function that seeks Pacman) [Demo]:
Against a minimizing ghost: minimax Pacman scores 493, while expectimax Pacman scores -303 (winning only 1/5 games).
Against a random ghost: minimax Pacman scores 483, while expectimax Pacman scores 503.

Expectimax Pruning?
[Figure: an expectimax tree posing the question of whether chance nodes can be pruned.]

Expectimax Evaluation
For minimax search, the scale of the evaluation function doesn't matter: we just want better states to have higher evaluations (get the ordering right). We call this property insensitivity to monotonic transformations.
For expectimax, we need the magnitudes to be meaningful as well. E.g., we must know whether a 50% / 50% lottery between A and B is better than a 100% chance of C: "100 or -10" versus 0 is different from "10 or -100" versus 0.
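The expectimax pseudocode above translates almost line-for-line into runnable Python. A sketch under the same hypothetical explicit-tree encoding as before, where a chance node's children carry their probabilities as (probability, subtree) pairs:

```python
def expectimax(node):
    """Expectimax value: max over choices, expectation over chance."""
    if isinstance(node, (int, float)):
        return node  # terminal node: evaluation is given
    kind, children = node
    if kind == "max":
        return max(expectimax(child) for child in children)
    # chance node: probability-weighted average of child values
    return sum(p * expectimax(child) for p, child in children)

# MAX chooses between two fair coin flips over outcomes.
tree = ("max", [
    ("chance", [(0.5, 8), (0.5, 4)]),   # expected value 6
    ("chance", [(0.5, 10), (0.5, 0)]),  # expected value 5
])
print(expectimax(tree))  # 6.0
```

Replacing each chance node's expectation with a min over its outcomes recovers minimax, which is one way to see that minimax is the special case where all probability mass sits on the worst outcome.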

Mixed Layer Types
E.g., backgammon. Expectiminimax: the environment is an extra player that moves after each agent. Chance nodes take expectations; otherwise it is like minimax.

Stochastic Two-Player
Dice rolls increase b: there are 21 possible rolls with 2 dice, and backgammon has about 20 legal moves per position. Depth 4 = 20 x (21 x 20)^3, about 1.2 x 10^9 nodes.
As depth increases, the probability of reaching a given node shrinks, so the value of lookahead is diminished and limiting depth is less damaging. But pruning is less possible.
TDGammon uses depth-2 search + a very good evaluation function + reinforcement learning: world-champion level play.

Non-Zero-Sum Games
Similar to minimax: utilities are now tuples, each player maximizes their own entry at each node, and values are propagated (or backed up) from children. This can give rise to cooperation and to competition dynamically.
[Figure: a three-player game tree with utility triples such as (1,2,6), (4,3,2), (6,1,2), (7,4,1), (5,1,1), (1,5,2), (7,7,1), and (5,4,5).]
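The tuple-utility backup for non-zero-sum games can be sketched as a small variation on minimax. The tree below and the two-player indexing are hypothetical, chosen only to illustrate that each player maximizes its own component of the utility tuple:

```python
def backup(node):
    """Back up utility tuples: at player i's node, pick the child
    tuple whose i-th entry is largest."""
    if isinstance(node, tuple) and all(isinstance(x, (int, float)) for x in node):
        return node  # terminal: a utility tuple, one entry per player
    player, children = node
    # No minimization anywhere: every player maximizes its own entry.
    return max((backup(c) for c in children), key=lambda u: u[player])

# Two players, 0 and 1; player 0 moves at the root, player 1 below.
tree = (0, [
    (1, [(1, 2), (4, 3)]),   # player 1 prefers (4, 3): its entry 3 > 2
    (1, [(6, 1), (7, 4)]),   # player 1 prefers (7, 4): its entry 4 > 1
])
print(backup(tree))  # (7, 4)
```

Here player 1's choices happen to also favor player 0 (the backed-up tuple gives player 0 its best reachable entry), which is the kind of implicit cooperation the slide alludes to; with different leaf tuples the same rule produces competition instead.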