CS 5522: Artificial Intelligence II

CS 5522: Artificial Intelligence II: Bayes Nets
Instructor: Alan Ritter, Ohio State University
[These slides were adapted from CS188 Intro to AI at UC Berkeley. All materials available at http://ai.berkeley.edu.]

Probabilistic Models
Models describe how (a portion of) the world works.
Models are always simplifications:
May not account for every variable
May not account for all interactions between variables
"All models are wrong; but some are useful." (George E. P. Box)
What do we do with probabilistic models? We (or our agents) need to reason about unknown variables, given evidence.
Example: explanation (diagnostic reasoning)
Example: prediction (causal reasoning)
Example: value of information

Independence

Independence
Two variables X and Y are independent if, for all x and y: P(x, y) = P(x) P(y)
This says that their joint distribution factors into a product of two simpler distributions.
Another form: for all x and y, P(x | y) = P(x)
We write: X ⊥ Y
Independence is a simplifying modeling assumption.
Empirical joint distributions: at best close to independent.
What could we assume for {Weather, Traffic, Cavity, Toothache}?

Example: Independence?
P(T, W):
T     W     P
hot   sun   0.4
hot   rain  0.1
cold  sun   0.2
cold  rain  0.3

P(T):
T     P
hot   0.5
cold  0.5

P(W):
W     P
sun   0.6
rain  0.4

P(T) P(W):
T     W     P
hot   sun   0.3
hot   rain  0.2
cold  sun   0.3
cold  rain  0.2

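As a quick check on the tables above, the sketch below (my own illustration, not code from the slides) recomputes the marginals P(T) and P(W) from the joint and tests whether P(t, w) = P(t) P(w) holds for every entry:

```python
from itertools import product

# Joint distribution P(T, W) from the slide.
joint = {
    ("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
    ("cold", "sun"): 0.2, ("cold", "rain"): 0.3,
}

# Marginals P(T) and P(W), obtained by summing out the other variable.
p_t, p_w = {}, {}
for (t, w), p in joint.items():
    p_t[t] = p_t.get(t, 0.0) + p
    p_w[w] = p_w.get(w, 0.0) + p

# T and W are independent iff P(t, w) = P(t) P(w) for every assignment.
independent = all(
    abs(joint[(t, w)] - p_t[t] * p_w[w]) < 1e-9
    for t, w in product(p_t, p_w)
)

print(p_t)          # P(T): hot 0.5, cold 0.5 (up to float rounding)
print(p_w)          # P(W): sun 0.6, rain 0.4 (up to float rounding)
print(independent)  # False: e.g. P(hot, sun) = 0.4 but P(hot) P(sun) = 0.3
```
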
Example: Independence
N fair, independent coin flips, each with P(H) = 0.5, P(T) = 0.5.

Conditional Independence
P(Toothache, Cavity, Catch)
If I have a cavity, the probability that the probe catches in it doesn't depend on whether I have a toothache:
P(+catch | +toothache, +cavity) = P(+catch | +cavity)
The same independence holds if I don't have a cavity:
P(+catch | +toothache, -cavity) = P(+catch | -cavity)
Catch is conditionally independent of Toothache given Cavity:
P(Catch | Toothache, Cavity) = P(Catch | Cavity)
Equivalent statements:
P(Toothache | Catch, Cavity) = P(Toothache | Cavity)
P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity)
One can be derived from the other easily.

Conditional Independence
Unconditional (absolute) independence is very rare (why?)
Conditional independence is our most basic and robust form of knowledge about uncertain environments.
X is conditionally independent of Y given Z if and only if, for all x, y, z:
P(x, y | z) = P(x | z) P(y | z)
or, equivalently, if and only if:
P(x | z, y) = P(x | z)

Conditional Independence
What about this domain: Traffic, Umbrella, Raining?

Conditional Independence
What about this domain: Fire, Smoke, Alarm?

Conditional Independence and the Chain Rule
Chain rule: P(X1, X2, ..., Xn) = P(X1) P(X2 | X1) P(X3 | X1, X2) ...
Trivial decomposition: P(Rain, Traffic, Umbrella) = P(Rain) P(Traffic | Rain) P(Umbrella | Rain, Traffic)
With assumption of conditional independence: P(Rain, Traffic, Umbrella) = P(Rain) P(Traffic | Rain) P(Umbrella | Rain)
Bayes nets / graphical models help us express conditional independence assumptions

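To make the trivial decomposition concrete, the sketch below builds P(r), P(t | r), and P(u | r, t) from a small made-up joint over (Rain, Traffic, Umbrella) and checks that their product reproduces every joint entry; the numbers are invented for illustration and are not from the slides:

```python
from itertools import product

# An arbitrary (made-up) joint P(R, T, U) over binary variables; values sum to 1.
joint = {
    ("+r", "+t", "+u"): 0.20, ("+r", "+t", "-u"): 0.05,
    ("+r", "-t", "+u"): 0.08, ("+r", "-t", "-u"): 0.02,
    ("-r", "+t", "+u"): 0.05, ("-r", "+t", "-u"): 0.15,
    ("-r", "-t", "+u"): 0.05, ("-r", "-t", "-u"): 0.40,
}

def marginal(vals):
    """Sum the joint over all entries whose assignment starts with vals."""
    return sum(p for assignment, p in joint.items()
               if assignment[:len(vals)] == vals)

# Chain rule: P(r, t, u) = P(r) * P(t | r) * P(u | r, t), no independence assumed.
for r, t, u in product(["+r", "-r"], ["+t", "-t"], ["+u", "-u"]):
    p_r = marginal((r,))
    p_t_given_r = marginal((r, t)) / p_r
    p_u_given_rt = joint[(r, t, u)] / marginal((r, t))
    assert abs(joint[(r, t, u)] - p_r * p_t_given_r * p_u_given_rt) < 1e-12
print("Chain rule reproduces every entry of the joint.")
```
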
Ghostbusters Chain Rule
Each sensor depends only on where the ghost is.
That means the two sensors are conditionally independent, given the ghost position.
T: Top square is red
B: Bottom square is red
G: Ghost is in the top
Givens:
P(+g) = 0.5    P(-g) = 0.5
P(+t | +g) = 0.8    P(+t | -g) = 0.4
P(+b | +g) = 0.4    P(+b | -g) = 0.8
P(T, B, G) = P(G) P(T | G) P(B | G)

T   B   G   P(T, B, G)
+t  +b  +g  0.16
+t  +b  -g  0.16
+t  -b  +g  0.24
+t  -b  -g  0.04
-t  +b  +g  0.04
-t  +b  -g  0.24
-t  -b  +g  0.06
-t  -b  -g  0.06

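The joint table above can be reproduced directly from the three given distributions, which also lets us verify the conditional independence of the two sensors given the ghost position. A small sketch using the slide's numbers (the variable names in the code are mine):

```python
from itertools import product

# Local distributions from the slide.
p_g = {"+g": 0.5, "-g": 0.5}
p_t_given_g = {("+t", "+g"): 0.8, ("-t", "+g"): 0.2,
               ("+t", "-g"): 0.4, ("-t", "-g"): 0.6}
p_b_given_g = {("+b", "+g"): 0.4, ("-b", "+g"): 0.6,
               ("+b", "-g"): 0.8, ("-b", "-g"): 0.2}

# P(T, B, G) = P(G) P(T | G) P(B | G), as stated on the slide.
joint = {}
for t, b, g in product(["+t", "-t"], ["+b", "-b"], ["+g", "-g"]):
    joint[(t, b, g)] = p_g[g] * p_t_given_g[(t, g)] * p_b_given_g[(b, g)]

print(joint[("+t", "+b", "+g")])  # 0.16, matching the table
print(joint[("-t", "-b", "-g")])  # 0.06, matching the table

# Conditional independence check: P(t, b | g) should equal P(t | g) P(b | g).
for g in ["+g", "-g"]:
    for t, b in product(["+t", "-t"], ["+b", "-b"]):
        p_tb_given_g = joint[(t, b, g)] / p_g[g]
        assert abs(p_tb_given_g - p_t_given_g[(t, g)] * p_b_given_g[(b, g)]) < 1e-12
```
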
Bayes Nets: Big Picture

Bayes Nets: Big Picture
Two problems with using full joint distribution tables as our probabilistic models:
Unless there are only a few variables, the joint is WAY too big to represent explicitly
Hard to learn (estimate) anything empirically about more than a few variables at a time
Bayes nets: a technique for describing complex joint distributions (models) using simple, local distributions (conditional probabilities)
More properly called graphical models
We describe how variables locally interact
Local interactions chain together to give global, indirect interactions
For about 10 min, we'll be vague about how these interactions are specified

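For a rough sense of scale (the numbers here are illustrative, not from the slides): a full joint table over 30 binary variables has 2^30, roughly a billion, entries, while a Bayes net over the same 30 variables in which each node has at most 3 parents needs at most 30 * 2^3 = 240 conditional probabilities, one per node per configuration of its parents (the complementary value in each row is determined).
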
Example Bayes Net: Insurance

Example Bayes Net: Car

Graphical Model Notation
Nodes: variables (with domains)
Can be assigned (observed) or unassigned (unobserved)
Arcs: interactions
Indicate direct influence between variables
Formally: encode conditional independence (more later)
For now: imagine that arrows mean direct causation (in general, they don't!)

Example: Coin Flips
N independent coin flips: X_1, X_2, ..., X_n
No interactions between variables: absolute independence

Example: Traffic
Variables:
R: It rains
T: There is traffic
Model 1: independence (R and T, no arc between them)
Model 2: rain causes traffic (arc from R to T)
Why is an agent using model 2 better?

Example: Traffic II
Let's build a causal graphical model!
Variables:
T: Traffic
R: It rains
L: Low pressure
D: Roof drips
B: Ballgame
C: Cavity

Example: Alarm Network

Example: Alarm Network
Variables:
B: Burglary
A: Alarm goes off
M: Mary calls
J: John calls
E: Earthquake!

Bayes Net Semantics

Bayes Net Semantics
A Bayes net = Topology (graph) + Local Conditional Probabilities
A set of nodes, one per variable X
A directed, acyclic graph
A conditional distribution for each node given its parents A_1, ..., A_n: a collection of distributions over X, one for each combination of parents' values
CPT: conditional probability table
Description of a noisy causal process

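One minimal way to make "topology + local conditional probabilities" concrete in code is to store, for each node, its parent list and a CPT keyed by (value, parent values). This is only an illustrative sketch (the representation and function name are mine, not the course's); the numbers match the rain/traffic example used later in the deck:

```python
# A Bayes net as: parent lists plus a CPT per node.
# CPT entries are keyed by (node_value, tuple_of_parent_values).
parents = {"R": [], "T": ["R"]}
cpt = {
    "R": {("+r", ()): 0.25, ("-r", ()): 0.75},
    "T": {("+t", ("+r",)): 0.75, ("-t", ("+r",)): 0.25,
          ("+t", ("-r",)): 0.50, ("-t", ("-r",)): 0.50},
}

def joint_probability(assignment):
    """P(full assignment) = product over nodes of P(x | parents(x))."""
    p = 1.0
    for node, value in assignment.items():
        parent_values = tuple(assignment[par] for par in parents[node])
        p *= cpt[node][(value, parent_values)]
    return p

print(joint_probability({"R": "+r", "T": "+t"}))  # 0.1875, i.e. 3/16
```
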
Probabilities in BNs

Probabilities in BNs
Bayes nets implicitly encode joint distributions, as a product of local conditional distributions.
To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together:
P(x_1, x_2, ..., x_n) = product over i of P(x_i | parents(X_i))
Example (with Cavity as the parent of both Toothache and Catch, as above):
P(+cavity, +catch, -toothache) = P(+cavity) P(+catch | +cavity) P(-toothache | +cavity)

Probabilities in BNs
Why are we guaranteed that setting P(x_1, ..., x_n) = product over i of P(x_i | parents(X_i)) results in a proper joint distribution?
Chain rule (valid for all distributions): P(x_1, ..., x_n) = product over i of P(x_i | x_1, ..., x_{i-1})
Assume conditional independences: P(x_i | x_1, ..., x_{i-1}) = P(x_i | parents(X_i))
Consequence: P(x_1, ..., x_n) = product over i of P(x_i | parents(X_i))
Not every BN can represent every joint distribution
The topology enforces certain conditional independencies

Example: Coin Flips
X_1, X_2, ..., X_n, each with:
h  0.5
t  0.5
Only distributions whose variables are absolutely independent can be represented by a Bayes net with no arcs.

Example: Traffic
P(R):
+r  1/4
-r  3/4
P(T | R):
+r  +t  3/4
+r  -t  1/4
-r  +t  1/2
-r  -t  1/2

Example: Alarm Network
Burglary -> Alarm, Earthquake -> Alarm, Alarm -> John calls, Alarm -> Mary calls

B   P(B)
+b  0.001
-b  0.999

E   P(E)
+e  0.002
-e  0.998

B   E   A   P(A | B, E)
+b  +e  +a  0.95
+b  +e  -a  0.05
+b  -e  +a  0.94
+b  -e  -a  0.06
-b  +e  +a  0.29
-b  +e  -a  0.71
-b  -e  +a  0.001
-b  -e  -a  0.999

A   J   P(J | A)
+a  +j  0.9
+a  -j  0.1
-a  +j  0.05
-a  -j  0.95

A   M   P(M | A)
+a  +m  0.7
+a  -m  0.3
-a  +m  0.01
-a  -m  0.99

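Using the CPTs above, the probability of any full assignment is the product of one row per node. For example, P(+b, -e, +a, +j, +m) = P(+b) P(-e) P(+a | +b, -e) P(+j | +a) P(+m | +a) = 0.001 * 0.998 * 0.94 * 0.9 * 0.7, roughly 0.00059. The short sketch below (my own check, not slide code) just carries out that multiplication:

```python
# One row from each CPT above, for the assignment (+b, -e, +a, +j, +m).
p_b = 0.001   # P(+b)
p_e = 0.998   # P(-e)
p_a = 0.94    # P(+a | +b, -e)
p_j = 0.9     # P(+j | +a)
p_m = 0.7     # P(+m | +a)

print(p_b * p_e * p_a * p_j * p_m)  # ~0.000591
```
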
Example: Traffic
Causal direction: R -> T
P(R):
+r  1/4
-r  3/4
P(T | R):
+r  +t  3/4
+r  -t  1/4
-r  +t  1/2
-r  -t  1/2
Joint P(R, T):
+r  +t  3/16
+r  -t  1/16
-r  +t  6/16
-r  -t  6/16

Example: Reverse Traffic
Reverse causality? T -> R
P(T):
+t  9/16
-t  7/16
P(R | T):
+t  +r  1/3
+t  -r  2/3
-t  +r  1/7
-t  -r  6/7
Joint P(T, R):
+r  +t  3/16
+r  -t  1/16
-r  +t  6/16
-r  -t  6/16

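The point of this pair of examples is that both factorizations encode exactly the same joint distribution over R and T. The sketch below (an illustration of mine, with the slide's fractions written as floats) multiplies out both directions and compares them entry by entry:

```python
from itertools import product

# Causal direction: P(R) and P(T | R).
p_r = {"+r": 1/4, "-r": 3/4}
p_t_given_r = {("+t", "+r"): 3/4, ("-t", "+r"): 1/4,
               ("+t", "-r"): 1/2, ("-t", "-r"): 1/2}

# Reverse direction: P(T) and P(R | T).
p_t = {"+t": 9/16, "-t": 7/16}
p_r_given_t = {("+r", "+t"): 1/3, ("-r", "+t"): 2/3,
               ("+r", "-t"): 1/7, ("-r", "-t"): 6/7}

for r, t in product(["+r", "-r"], ["+t", "-t"]):
    forward = p_r[r] * p_t_given_r[(t, r)]
    reverse = p_t[t] * p_r_given_t[(r, t)]
    print(r, t, forward)                   # 3/16, 1/16, 6/16, 6/16
    assert abs(forward - reverse) < 1e-12  # same joint either way
```
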
Causality?
When Bayes nets reflect the true causal patterns:
Often simpler (nodes have fewer parents)
Often easier to think about
Often easier to elicit from experts
BNs need not actually be causal
Sometimes no causal net exists over the domain (especially if variables are missing)
E.g. consider the variables Traffic and Drips
End up with arrows that reflect correlation, not causation
What do the arrows really mean?
Topology may happen to encode causal structure
Topology really encodes conditional independence

Bayes Nets
So far: how a Bayes net encodes a joint distribution
Next: how to answer queries about that distribution
Today: First assembled BNs using an intuitive notion of conditional independence as causality. Then saw that the key property is conditional independence.
Main goal: answer queries about conditional independence and influence
After that: how to answer numerical queries (inference)