CS 188: Artificial Intelligence: Probability

CS188 Outline
We're done with Part I: Search and Planning!
Part II: Probabilistic Reasoning: diagnosis, speech recognition, tracking objects, robot mapping, genetics, error-correcting codes, lots more!
Part III: Machine Learning

CS 188: Artificial Intelligence: Probability
Instructors: Dan Klein and Pieter Abbeel, University of California, Berkeley
[These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Today
Inference in Ghostbusters
Probability: random variables, joint and marginal distributions, conditional distributions, the Product Rule, Chain Rule, and Bayes Rule, inference, independence
You'll need all this stuff A LOT for the next few weeks, so make sure you go over it now!

A ghost is in the grid somewhere. Sensor readings tell how close a square is to the ghost: on the ghost: red; 1 or 2 away: orange; 3 or 4 away: yellow; 5+ away: green. Sensors are noisy, but we know P(Color | Distance), e.g. for Distance = 3: P(red | 3) = 0.05, P(orange | 3) = 0.15, P(yellow | 3) = 0.5, P(green | 3) = 0.3.
[Demo: Ghostbuster no probability (L12D1)]
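
As a small illustration of the sensor model, the P(Color | Distance = 3) row above can be stored as a table and sampled from; the table values come from the slide, while the sampling helper and its name are just an assumed sketch.

    import random

    # P(Color | Distance = 3), from the slide's table.
    sensor_given_distance_3 = {"red": 0.05, "orange": 0.15, "yellow": 0.5, "green": 0.3}

    def sample_reading(dist):
        """Draw one noisy sensor reading from a color -> probability table."""
        r, cumulative = random.random(), 0.0
        for color, p in dist.items():
            cumulative += p
            if r < cumulative:
                return color
        return color  # guard against floating-point rounding at the top end

    print(sample_reading(sensor_given_distance_3))  # 'yellow' about half the time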

Uncertainty
General situation:
Observed variables (evidence): the agent knows certain things about the state of the world (e.g., sensor readings or symptoms).
Unobserved variables: the agent needs to reason about other aspects (e.g., where an object is or what disease is present).
Model: the agent knows something about how the known variables relate to the unknown variables.
Probabilistic reasoning gives us a framework for managing our beliefs and knowledge.

Random Variables
A random variable is some aspect of the world about which we (may) have uncertainty:
R = Is it raining? T = Is it hot or cold? D = How long will it take to drive to work? L = Where is the ghost?
We denote random variables with capital letters. Like variables in a CSP, random variables have domains:
R in {true, false} (often written as {+r, -r}); T in {hot, cold}; D in [0, ∞); L in possible locations, maybe {(0,0), (0,1), ...}.

Probability Distributions
Unobserved random variables have distributions; a distribution associates a probability with each value.
Temperature, P(T): hot 0.5, cold 0.5. Weather, P(W): sun 0.6, rain 0.1, fog 0.3, meteor 0.0.
A distribution is a TABLE of probabilities of values. A probability (of a lower-case value) is a single number, e.g. P(W = rain) = 0.1; shorthand like P(rain) is OK if all domain entries are unique.
Must have: P(X = x) >= 0 for every x, and the probabilities sum to one: sum over x of P(X = x) = 1.
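
A minimal sketch of a distribution-as-table in code, using the P(W) numbers above; the validity check just verifies the two requirements (non-negative entries that sum to one).

    # P(W) as a table: each value in the domain gets a single probability.
    P_W = {"sun": 0.6, "rain": 0.1, "fog": 0.3, "meteor": 0.0}

    def is_valid_distribution(table, tol=1e-9):
        """Every probability must be >= 0 and the entries must sum to 1."""
        return all(p >= 0 for p in table.values()) and abs(sum(table.values()) - 1.0) < tol

    print(is_valid_distribution(P_W))  # True
    print(P_W["rain"])                 # a single probability: P(W = rain) = 0.1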

Joint Distributions
A joint distribution over a set of random variables X_1, ..., X_n specifies a real number for each assignment (or outcome): P(X_1 = x_1, ..., X_n = x_n). Must obey: P(x_1, ..., x_n) >= 0 and the probabilities of all outcomes sum to 1. Size of the distribution if there are n variables with domain sizes d? d^n entries. For all but the smallest distributions, impractical to write out!

Probabilistic Models
A probabilistic model is a joint distribution over a set of random variables. Probabilistic models: (random) variables with domains; assignments are called outcomes; joint distributions say whether assignments (outcomes) are likely; normalized: sum to 1.0; ideally, only certain variables directly interact. Compare constraint satisfaction problems: variables with domains; constraints state whether assignments are possible; ideally, only certain variables directly interact.
Distribution over T, W: the table P(T, W) gives a probability for each (t, w) pair. Constraint over T, W: (hot, sun): T; (hot, rain): F; (cold, sun): F; (cold, rain): T.

Events
An event is a set E of outcomes. From a joint distribution, we can calculate the probability of any event by summing the probabilities of the outcomes in it. Probability that it's hot AND sunny? Probability that it's hot? Probability that it's hot OR sunny? Typically, the events we care about are partial assignments, like P(T = hot).

Quiz: Events
Given the joint distribution P(X, Y): P(+x, +y) = 0.2, P(+x, -y) = 0.3, P(-x, +y) = 0.4, P(-x, -y) = 0.1. What are P(+x, +y)? P(+x)? P(-y OR +x)?
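
A small sketch of computing event probabilities from the quiz's joint P(X, Y): an event is a set of outcomes, so its probability is the sum of the matching joint entries. The function and its predicate-style argument are illustrative choices, not from the slides.

    # Joint distribution P(X, Y) from the quiz, keyed by (x, y) outcomes.
    P_XY = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3, ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}

    def prob_of_event(joint, event):
        """P(E): sum the joint probabilities of the outcomes for which the event predicate holds."""
        return sum(p for outcome, p in joint.items() if event(outcome))

    print(prob_of_event(P_XY, lambda o: o == ("+x", "+y")))             # P(+x, +y) = 0.2
    print(prob_of_event(P_XY, lambda o: o[0] == "+x"))                  # P(+x) = 0.5
    print(prob_of_event(P_XY, lambda o: o[1] == "-y" or o[0] == "+x"))  # P(-y OR +x) = 0.6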

Marginal Distributions
Marginal distributions are sub-tables which eliminate variables. Marginalization (summing out): combine collapsed rows by adding, e.g. P(T = t) = sum over s of P(T = t, W = s). For the T, W example this gives P(T): hot 0.5, cold 0.5 and P(W): sun 0.6, rain 0.4.

Quiz: Marginal Distributions
Given P(X, Y): P(+x, +y) = 0.2, P(+x, -y) = 0.3, P(-x, +y) = 0.4, P(-x, -y) = 0.1. What are the marginals P(X) (entries +x, -x) and P(Y) (entries +y, -y)?

Conditional Probabilities
A simple relation between joint and conditional probabilities: P(a | b) = P(a, b) / P(b). In fact, this is taken as the definition of a conditional probability.

Quiz: Conditional Probabilities
Using the same joint P(X, Y): what are P(+x | +y)? P(-x | +y)? P(-y | +x)?
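
Marginalization as code, a sketch over the same joint P(X, Y): summing out one variable means adding up every row that agrees on the variable we keep. The helper name and tuple indexing are illustrative.

    from collections import defaultdict

    P_XY = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3, ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}

    def marginalize(joint, keep_index):
        """Sum out every variable except the one at keep_index in each outcome tuple."""
        marginal = defaultdict(float)
        for outcome, p in joint.items():
            marginal[outcome[keep_index]] += p  # combine collapsed rows by adding
        return dict(marginal)

    print(marginalize(P_XY, 0))  # P(X): +x 0.5, -x 0.5
    print(marginalize(P_XY, 1))  # P(Y): +y 0.6, -y 0.4 (up to floating-point rounding)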

Conditional Distributions
Conditional distributions are probability distributions over some variables given fixed values of others. From the joint distribution P(T, W) we can read off conditional distributions such as P(W | T = hot): sun 0.8, rain 0.2.

Normalization Trick
To get a conditional distribution from the joint: SELECT the joint probabilities matching the evidence, then NORMALIZE the selection (make it sum to one). Why does this work? The sum of the selection is exactly P(evidence) (P(T = c), here), so dividing by it is just the definition of conditional probability.
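
A sketch of the select-and-normalize trick on the quiz joint P(X, Y), here conditioning on Y = +y; the function name and argument style are illustrative choices.

    P_XY = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3, ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}

    def condition(joint, evidence_index, evidence_value):
        """SELECT the joint entries matching the evidence, then NORMALIZE them to sum to one."""
        selected = {o: p for o, p in joint.items() if o[evidence_index] == evidence_value}
        z = sum(selected.values())  # the sum of the selection is exactly P(evidence)
        return {o: p / z for o, p in selected.items()}

    print(condition(P_XY, 1, "+y"))  # P(X | Y = +y): roughly +x 1/3, -x 2/3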

Quiz: Normalization Trick
Given the joint P(X, Y): P(+x, +y) = 0.2, P(+x, -y) = 0.3, P(-x, +y) = 0.4, P(-x, -y) = 0.1. What is P(X | Y = -y)? SELECT the joint probabilities matching the evidence, then NORMALIZE the selection (make it sum to one).

To Normalize
(Dictionary) To bring or restore to a normal condition. Procedure: Step 1: compute Z = the sum over all entries. Step 2: divide every entry by Z.
Example 1: P(W) proportional to sun 0.2, rain 0.3; Z = 0.5; normalizing gives sun 0.4, rain 0.6.
Example 2: counts over T, W: hot sun 20, hot rain 5, cold sun 10, cold rain 15; Z = 50; normalizing gives hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3. All entries now sum to ONE.

Probabilistic Inference
Probabilistic inference: compute a desired probability from other known probabilities (e.g., a conditional from the joint). We generally compute conditional probabilities, e.g. P(on time | no reported accidents) = 0.90. These represent the agent's beliefs given the evidence. Probabilities change with new evidence: P(on time | no accidents, 5 a.m.) = 0.95; P(on time | no accidents, 5 a.m., raining) = 0.80. Observing new evidence causes beliefs to be updated.

Inference by Enumeration
General case: evidence variables E_1, ..., E_k with observed values e_1, ..., e_k; a query* variable Q; hidden variables H_1, ..., H_r (together these are all the variables X_1, ..., X_n). We want: P(Q | e_1, ..., e_k). (* Works fine with multiple query variables, too.)
Step 1: select the entries consistent with the evidence. Step 2: sum out H to get the joint of the query and the evidence. Step 3: normalize.
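
The two-step normalization procedure, sketched on Example 2's counts; the function name is an illustrative choice.

    def normalize(table):
        """Step 1: compute Z = the sum over all entries. Step 2: divide every entry by Z."""
        z = sum(table.values())
        return {key: value / z for key, value in table.items()}

    counts = {("hot", "sun"): 20, ("hot", "rain"): 5, ("cold", "sun"): 10, ("cold", "rain"): 15}
    print(normalize(counts))
    # {('hot', 'sun'): 0.4, ('hot', 'rain'): 0.1, ('cold', 'sun'): 0.2, ('cold', 'rain'): 0.3} -- sums to ONE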

Inference by Enumeration
P(W)? P(W | winter)? P(W | winter, hot)?
Joint distribution P(S, T, W): summer hot sun 0.30; summer hot rain 0.05; summer cold sun 0.10; summer cold rain 0.05; winter hot sun 0.10; winter hot rain 0.05; winter cold sun 0.15; winter cold rain 0.20.
Obvious problems: worst-case time complexity O(d^n); space complexity O(d^n) to store the joint distribution.

The Product Rule
Sometimes we have conditional distributions but want the joint: P(x, y) = P(x | y) P(y).
Example: P(W): sun 0.8, rain 0.2. P(D | W): wet | sun 0.1, dry | sun 0.9, wet | rain 0.7, dry | rain 0.3. Multiplying gives P(D, W): wet sun 0.08, dry sun 0.72, wet rain 0.14, dry rain 0.06.
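
A sketch of the three-step enumeration procedure applied to the P(S, T, W) table above, answering P(W | winter) and P(W | winter, hot); the helper name and the index-based evidence encoding are illustrative.

    from collections import defaultdict

    # Joint P(S, T, W) from the slide, keyed by (season, temperature, weather).
    P_STW = {
        ("summer", "hot", "sun"): 0.30, ("summer", "hot", "rain"): 0.05,
        ("summer", "cold", "sun"): 0.10, ("summer", "cold", "rain"): 0.05,
        ("winter", "hot", "sun"): 0.10, ("winter", "hot", "rain"): 0.05,
        ("winter", "cold", "sun"): 0.15, ("winter", "cold", "rain"): 0.20,
    }

    def enumerate_query(joint, query_index, evidence):
        """evidence maps a variable index to its observed value; returns P(query variable | evidence)."""
        # Step 1: select the entries consistent with the evidence.
        selected = {o: p for o, p in joint.items() if all(o[i] == v for i, v in evidence.items())}
        # Step 2: sum out the hidden variables to get the joint of query and evidence.
        summed = defaultdict(float)
        for outcome, p in selected.items():
            summed[outcome[query_index]] += p
        # Step 3: normalize.
        z = sum(summed.values())
        return {value: p / z for value, p in summed.items()}

    print(enumerate_query(P_STW, 2, {0: "winter"}))            # P(W | winter): sun 0.5, rain 0.5
    print(enumerate_query(P_STW, 2, {0: "winter", 1: "hot"}))  # P(W | winter, hot): sun 2/3, rain 1/3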

The Chain Rule
More generally, we can always write any joint distribution as an incremental product of conditional distributions: P(x_1, x_2, ..., x_n) = P(x_1) P(x_2 | x_1) P(x_3 | x_1, x_2) ... = the product over i of P(x_i | x_1, ..., x_{i-1}). Why is this always true?

Bayes Rule
Two ways to factor a joint distribution over two variables: P(x, y) = P(x | y) P(y) = P(y | x) P(x). Dividing, we get: P(x | y) = P(y | x) P(x) / P(y). That's my rule!
Why is this at all helpful? It lets us build one conditional from its reverse; often one conditional is tricky but the other one is simple. It is the foundation of many systems we'll see later (e.g., ASR, MT). In the running for most important AI equation!

Inference with Bayes Rule
Example: diagnostic probability from causal probability: P(cause | effect) = P(effect | cause) P(cause) / P(effect).
Example: M: meningitis, S: stiff neck. From the example givens (the prior P(+m) and the causal conditionals P(+s | +m) and P(+s | -m)), Bayes rule yields the posterior P(+m | +s). Note: the posterior probability of meningitis is still very small. Note: you should still get stiff necks checked out! Why?
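
A worked sketch of the meningitis calculation. The slide's numerical givens did not survive transcription, so the numbers below are illustrative placeholders, not the slide's values; only the structure of the computation is from the lecture.

    # Bayes rule: P(+m | +s) = P(+s | +m) P(+m) / P(+s), with P(+s) obtained by summing over m.
    p_m = 0.0001            # placeholder prior P(+m): meningitis is rare
    p_s_given_m = 0.8       # placeholder causal conditional P(+s | +m)
    p_s_given_not_m = 0.01  # placeholder P(+s | -m): stiff necks happen without meningitis too

    p_s = p_s_given_m * p_m + p_s_given_not_m * (1 - p_m)  # marginal P(+s)
    p_m_given_s = p_s_given_m * p_m / p_s                   # diagnostic direction via Bayes rule

    print(p_m_given_s)  # roughly 0.008 with these placeholders: the posterior stays very small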

Quiz: Bayes Rule
Given: P(W): sun 0.8, rain 0.2, and P(D | W): wet | sun 0.1, dry | sun 0.9, wet | rain 0.7, dry | rain 0.3. What is P(W | dry)?

Ghostbusters, Revisited
Let's say we have two distributions: a prior distribution over ghost locations, P(G) (let's say this is uniform), and a sensor reading model, P(R | G). Given: we know what our sensors do; R = the reading color measured at (1,1); e.g. P(R = yellow | G = (1,1)) = 0.1. We can calculate the posterior distribution P(G | r) over ghost locations given a reading using Bayes rule: P(g | r) is proportional to P(r | g) P(g).
[Demo: Ghostbuster with probability (L12D2)]

Next Time: Markov Models
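
A sketch of the Ghostbusters posterior update under stated assumptions: a made-up 2x2 grid, a uniform prior, and placeholder likelihoods for a yellow reading (only the (1,1) value 0.1 comes from the slide).

    # Bayes rule over locations: P(g | r) is proportional to P(r | g) P(g).
    locations = [(0, 0), (0, 1), (1, 0), (1, 1)]
    prior = {g: 1.0 / len(locations) for g in locations}                      # uniform prior P(G)
    likelihood_yellow = {(0, 0): 0.5, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 0.1}  # placeholder P(R = yellow | G = g)

    unnormalized = {g: likelihood_yellow[g] * prior[g] for g in locations}
    z = sum(unnormalized.values())                                            # P(R = yellow)
    posterior = {g: p / z for g, p in unnormalized.items()}                   # P(G | R = yellow)

    print(posterior)  # squares that make a yellow reading likely get more posterior mass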