Bayesian Networks. Vibhav Gogate The University of Texas at Dallas

Bayesian Networks. Vibhav Gogate, The University of Texas at Dallas. Intro to AI (CS 4365). Many slides over the course adapted from Dan Klein, Luke Zettlemoyer, Stuart Russell, or Andrew Moore.

Outline Probabilistic models (and inference) Bayesian Networks (BNs) Independence in BNs

Bayes Nets: Big Picture Two problems with using full joint distribution tables as our probabilistic models: unless there are only a few variables, the joint is WAY too big to represent explicitly, and it is hard to learn (estimate) anything empirically about more than a few variables at a time. Bayes nets: a technique for describing complex joint distributions (models) using simple, local distributions (conditional probabilities). More properly called graphical models. We describe how variables locally interact; local interactions chain together to give global, indirect interactions.

Bayes Net Semantics Let's formalize the semantics of a Bayes net. A set of nodes, one per variable X. A directed, acyclic graph. A conditional distribution for each node: a collection of distributions over X, one for each combination of its parents' values (CPT: conditional probability table). A Bayes net = Topology (graph) + Local Conditional Probabilities.
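One concrete way to picture these ingredients (a minimal sketch in Python; the variable names and numbers are illustrative, not the slides' example): the DAG is a parent list per node, and each node carries a CPT indexed by its own value and its parents' values.

```python
# A tiny Bayes net: the DAG as a parent list per node, plus one CPT per node.
# Each CPT maps (node_value, tuple_of_parent_values) -> probability.
# Variable names and numbers here are illustrative only.

parents = {
    "Rain": [],
    "Traffic": ["Rain"],
}

cpts = {
    "Rain": {("+r", ()): 0.1, ("-r", ()): 0.9},                  # P(Rain)
    "Traffic": {("+t", ("+r",)): 0.8, ("-t", ("+r",)): 0.2,      # P(Traffic | Rain)
                ("+t", ("-r",)): 0.1, ("-t", ("-r",)): 0.9},
}
```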

Example Bayes Net: Car

Probabilities in BNs Bayes nets implicitly encode joint distributions as a product of local conditional distributions. To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together: P(x1, ..., xn) = P(x1 | parents(X1)) × ... × P(xn | parents(Xn)). This lets us reconstruct any entry of the full joint. Not every BN can represent every joint distribution: the topology enforces certain independence assumptions. Compare to the exact decomposition according to the chain rule!
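Reusing the `parents` and `cpts` dictionaries from the sketch above, reconstructing a full-joint entry is just this product (again a sketch, not code from the course):

```python
def joint_probability(assignment, parents, cpts):
    """Multiply each variable's conditional probability given its parents' values."""
    p = 1.0
    for var, value in assignment.items():
        parent_values = tuple(assignment[pa] for pa in parents[var])
        p *= cpts[var][(value, parent_values)]
    return p

# With the illustrative CPTs above: P(+r, +t) = P(+r) * P(+t | +r) = 0.1 * 0.8 = 0.08
print(joint_probability({"Rain": "+r", "Traffic": "+t"}, parents, cpts))
```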

Example Bayes Net: Insurance

Example: Independence N fair, independent coin flips, each with P(h) = 0.5 and P(t) = 0.5.

Example: Coin Flips N independent coin flips X1, X2, ..., Xn. No interactions between variables: absolute independence.

Independence Two variables X and Y are independent if P(X, Y) = P(X) P(Y). This says that their joint distribution factors into a product of two simpler distributions. Another form: P(X | Y) = P(X). We write X ⊥ Y. Independence is a simplifying modeling assumption. Empirical joint distributions are at best close to independent. What could we assume for {Weather, Traffic, Cavity, Toothache}?

Example: Independence?

P1(T, W):
  T     W     P
  warm  sun   0.4
  warm  rain  0.1
  cold  sun   0.2
  cold  rain  0.3

P(T):              P(W):
  warm  0.5          sun   0.6
  cold  0.5          rain  0.4

P2(T, W):
  T     W     P
  warm  sun   0.3
  warm  rain  0.2
  cold  sun   0.3
  cold  rain  0.2

Only P2 factors as P(T) P(W): for example 0.5 × 0.6 = 0.3 matches every corresponding P2 entry, while P1(warm, sun) = 0.4 ≠ 0.5 × 0.6. So T and W are independent under P2 but not under P1.
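A quick numeric check of the independence definition on these two tables (a small sketch; the helper functions are mine):

```python
from itertools import product

def marginal(joint, axis):
    """Marginalize a dict {(t, w): p} onto one axis (0 for T, 1 for W)."""
    m = {}
    for key, p in joint.items():
        m[key[axis]] = m.get(key[axis], 0.0) + p
    return m

def is_independent(joint, tol=1e-9):
    """True iff joint(t, w) == P(t) * P(w) for every entry."""
    pt, pw = marginal(joint, 0), marginal(joint, 1)
    return all(abs(joint[(t, w)] - pt[t] * pw[w]) <= tol
               for t, w in product(pt, pw))

P1 = {("warm", "sun"): 0.4, ("warm", "rain"): 0.1,
      ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}
P2 = {("warm", "sun"): 0.3, ("warm", "rain"): 0.2,
      ("cold", "sun"): 0.3, ("cold", "rain"): 0.2}

print(is_independent(P1))  # False: 0.4 != 0.5 * 0.6
print(is_independent(P2))  # True: every entry equals P(T) * P(W)
```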

Conditional Independence P(Toothache, Cavity, Catch). If I have a cavity, the probability that the probe catches in it doesn't depend on whether I have a toothache: P(+catch | +toothache, +cavity) = P(+catch | +cavity). The same independence holds if I don't have a cavity: P(+catch | +toothache, -cavity) = P(+catch | -cavity). Catch is conditionally independent of Toothache given Cavity: P(Catch | Toothache, Cavity) = P(Catch | Cavity). Equivalent statements: P(Toothache | Catch, Cavity) = P(Toothache | Cavity); P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity). One can be derived from the others easily.
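To see why the equivalent statements follow from one another, apply the product rule (a short derivation in the slide's notation):

P(Toothache, Catch | Cavity) = P(Catch | Toothache, Cavity) P(Toothache | Cavity)   (product rule)
                             = P(Catch | Cavity) P(Toothache | Cavity)              (conditional independence)

Dividing the first and last expressions by P(Toothache | Cavity), when it is nonzero, recovers P(Catch | Toothache, Cavity) = P(Catch | Cavity).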

Conditional Independence Unconditional (absolute) independence is very rare (why?). Conditional independence is our most basic and robust form of knowledge about uncertain environments. What about this domain: Traffic, Umbrella, Raining? What about Fire, Smoke, Alarm?

Ghostbusters Chain Rule 2-position maze, each sensor indicates ghost location. T: Top square is red. B: Bottom square is red. G: Ghost is in the top. The two sensors are conditionally independent given the ghost position, so we can assume:

P(+g) = 0.5
P(+t | +g) = 0.8    P(+t | -g) = 0.4
P(+b | +g) = 0.4    P(+b | -g) = 0.8
P(T, B, G) = P(G) P(T | G) P(B | G)

  T   B   G   P(T, B, G)
  +t  +b  +g  0.16
  +t  +b  -g  0.16
  +t  -b  +g  0.24
  +t  -b  -g  0.04
  -t  +b  +g  0.04
  -t  +b  -g  0.24
  -t  -b  +g  0.06
  -t  -b  -g  0.06
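A short sanity check (a sketch, not from the slides) that the product P(G) P(T | G) P(B | G) reproduces the joint table above:

```python
# CPTs from the Ghostbusters slide.
p_g = {"+g": 0.5, "-g": 0.5}
p_t_given_g = {("+t", "+g"): 0.8, ("-t", "+g"): 0.2,
               ("+t", "-g"): 0.4, ("-t", "-g"): 0.6}
p_b_given_g = {("+b", "+g"): 0.4, ("-b", "+g"): 0.6,
               ("+b", "-g"): 0.8, ("-b", "-g"): 0.2}

# Rebuild the joint with the chain rule plus conditional independence.
for t in ("+t", "-t"):
    for b in ("+b", "-b"):
        for g in ("+g", "-g"):
            p = p_g[g] * p_t_given_g[(t, g)] * p_b_given_g[(b, g)]
            print(t, b, g, round(p, 2))  # matches the table, e.g. +t +b +g -> 0.16
```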

Example: Traffic Variables: R: It rains. T: There is traffic. Model 1: independence. Model 2: rain is conditioned on traffic. Why is an agent using model 2 better? Model 3: traffic is conditioned on rain. Is this better than model 2?

Example: Alarm Network Variables: B: Burglary, A: Alarm goes off, M: Mary calls, J: John calls, E: Earthquake!

Example: Alarm Network Structure: Burglary -> Alarm <- Earthquake; Alarm -> John calls; Alarm -> Mary calls.

P(B):              P(E):
  +b  0.001          +e  0.002
  -b  0.999          -e  0.998

P(A | B, E):
  B   E   A   P(A | B, E)
  +b  +e  +a  0.95
  +b  +e  -a  0.05
  +b  -e  +a  0.94
  +b  -e  -a  0.06
  -b  +e  +a  0.29
  -b  +e  -a  0.71
  -b  -e  +a  0.001
  -b  -e  -a  0.999

P(J | A):            P(M | A):
  +a  +j  0.9           +a  +m  0.7
  +a  -j  0.1           +a  -m  0.3
  -a  +j  0.05          -a  +m  0.01
  -a  -j  0.95          -a  -m  0.99
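For example, the probability this net assigns to the full assignment (+b, -e, +a, +j, +m) is one entry from each CPT multiplied together. A small sketch of the arithmetic:

```python
# P(+b, -e, +a, +j, +m) = P(+b) P(-e) P(+a | +b, -e) P(+j | +a) P(+m | +a)
p = 0.001 * 0.998 * 0.94 * 0.9 * 0.7
print(p)  # ~0.000591
```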

Changing Bayes Net Structure The same joint distribution can be encoded in many different Bayes nets. Analysis question: given some edges, what other edges do you need to add? One answer: fully connect the graph. Better answer: don't make any false conditional independence assumptions.

Example: Independence For this graph with two unconnected nodes X1 and X2 (each with P(h) = 0.5, P(t) = 0.5), you can fiddle with the CPTs all you want, but you won't be able to represent any distribution in which the flips are dependent! Nets with this topology cover only a subset of all distributions.

Example: Coins Extra arcs don't prevent representing independence, they just allow non-independence. Net 1: X1 and X2 unconnected, each with P(h) = 0.5, P(t) = 0.5. Net 2: X1 -> X2, with P(X1) = (h: 0.5, t: 0.5) and P(X2 | X1) uniform in every row:

  h | h  0.5    t | h  0.5
  h | t  0.5    t | t  0.5

Adding unneeded arcs isn't wrong, it's just inefficient.

Topology Limits Distributions Given some graph topology G, only certain joint distributions can be encoded. The graph structure guarantees certain (conditional) independences (there might be more independence than the graph shows). Adding arcs increases the set of representable distributions, but has several costs. Full conditioning (a fully connected graph) can encode any distribution.

Independence in a BN Important question about a BN: are two nodes independent given certain evidence? If yes, can prove using algebra (tedious in general). If no, can prove with a counterexample. Example: X -> Y -> Z. Question: are X and Z independent? Answer: no. Example: low pressure causes rain, which causes traffic. Knowledge about X may change belief in Z, and knowledge about Z may change belief in X (via Y). Addendum: for particular CPTs they could still be independent: how?

Causal Chains This configuration is a causal chain: X -> Y -> Z. X: Low pressure, Y: Rain, Z: Traffic. Is X independent of Z given Y? Yes! Evidence along the chain blocks the influence.
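Why observing Y blocks the chain (a short check using the chain's factorization):

P(X, Y, Z) = P(X) P(Y | X) P(Z | Y), so
P(Z | X, Y) = P(X, Y, Z) / P(X, Y) = [P(X) P(Y | X) P(Z | Y)] / [P(X) P(Y | X)] = P(Z | Y),

which no longer depends on X once Y is observed.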

Common Parent Another basic configuration: two effects of the same parent, X <- Y -> Z. Y: Project due, X: Forum busy, Z: Lab full. Are X and Z independent? Are X and Z independent given Y? Yes! Observing the cause blocks influence between effects.

Common Effect Last configuration: two causes of one effect (a v-structure), X -> Y <- Z. X: Raining, Z: Ballgame, Y: Traffic. Are X and Z independent? Yes: the ballgame and the rain cause traffic, but they are not correlated. (Still need to prove they must be independent; try it!) Are X and Z independent given Y? No: seeing traffic puts the rain and the ballgame in competition as explanations! This is backwards from the other cases: observing an effect activates influence between its possible causes.
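A small numerical illustration of explaining away (the numbers below are my own illustrative choices, not from the slides): with Rain -> Traffic <- Ballgame, observing traffic raises the belief in rain, and then also observing the ballgame pushes it back down.

```python
from itertools import product

# Illustrative priors and CPT for Rain -> Traffic <- Ballgame (assumed numbers).
p_r = {True: 0.3, False: 0.7}
p_b = {True: 0.1, False: 0.9}
p_t_given = {(True, True): 0.95, (True, False): 0.8,
             (False, True): 0.7, (False, False): 0.1}  # P(traffic | rain, ballgame)

def p_rain_given(traffic=True, ballgame=None):
    """P(rain | traffic[, ballgame]) by brute-force enumeration."""
    num = den = 0.0
    for r, b in product((True, False), repeat=2):
        if ballgame is not None and b != ballgame:
            continue
        w = p_r[r] * p_b[b] * (p_t_given[(r, b)] if traffic else 1 - p_t_given[(r, b)])
        den += w
        if r:
            num += w
    return num / den

print(p_rain_given(traffic=True))                 # belief in rain goes up (~0.69)
print(p_rain_given(traffic=True, ballgame=True))  # ballgame explains the traffic away (~0.37)
```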

The General Case Any complex example can be analyzed using these three canonical cases. General question: in a given BN, are two variables independent (given evidence)? Solution: analyze the graph.

Reachability (D-Separation) Question: are X and Y conditionally independent given evidence variables {Z}? Yes, if X and Y are separated by Z. Look for active paths from X to Y; no active paths means independence! A path is active if each triple along it is active:
Causal chain A -> B -> C where B is unobserved (either direction)
Common cause A <- B -> C where B is unobserved
Common effect (aka v-structure) A -> B <- C where B or one of its descendants is observed
All it takes to block a path is a single inactive segment.

A much simpler D-separation test! Check whether X and Y are disconnected in a new undirected graph G' obtained from the Bayesian network as follows:
1. Until no more nodes can be deleted, delete any leaf node W from the DAG as long as W is not in X, Y, or Z.
2. Delete all edges outgoing from nodes in Z.
3. Remove directionality (make the remaining edges undirected).
If X and Y are disconnected in the resulting graph, they are d-separated given Z. (Not given in your book.)
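A sketch of this pruning test in code (my own illustrative implementation of the steps above, using plain adjacency dictionaries; not course code):

```python
def d_separated(dag, X, Y, Z):
    """Pruning-based d-separation test.

    dag: {node: set(children)}; X, Y, Z: sets of node names.
    Returns True if X and Y are d-separated given Z.
    """
    nodes = set(dag) | {c for cs in dag.values() for c in cs}
    edges = {(u, v) for u, cs in dag.items() for v in cs}
    keep = X | Y | Z

    # 1. Repeatedly delete leaf nodes (no outgoing edges) not in X, Y, or Z.
    changed = True
    while changed:
        changed = False
        for n in list(nodes):
            if n not in keep and not any(u == n for (u, _) in edges):
                nodes.discard(n)
                edges = {(u, v) for (u, v) in edges if v != n}
                changed = True

    # 2. Delete all edges outgoing from nodes in Z, then 3. drop directionality.
    undirected = {}
    for (u, v) in edges:
        if u in Z:
            continue
        undirected.setdefault(u, set()).add(v)
        undirected.setdefault(v, set()).add(u)

    # X and Y are d-separated iff no undirected path connects them.
    frontier, seen = list(X), set(X)
    while frontier:
        n = frontier.pop()
        if n in Y:
            return False
        for m in undirected.get(n, ()):
            if m not in seen:
                seen.add(m)
                frontier.append(m)
    return True

# Ballgame/rain v-structure: R -> T <- B.
dag = {"R": {"T"}, "B": {"T"}, "T": set()}
print(d_separated(dag, {"R"}, {"B"}, set()))   # True: independent with no evidence
print(d_separated(dag, {"R"}, {"B"}, {"T"}))   # False: observing T activates the v-structure
```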

D-Separation

Example: Independent? (Two worked d-separation exercises: small networks over the variables R, B, T and L, R, B, D, T; the graphs and the specific independence queries appear only in the slide figures.)

Example Variables: R: Raining, T: Traffic, D: Roof drips, S: I'm sad. (The network over these variables and the independence questions asked appear in the slide figure.)

Summary Bayes nets compactly encode joint distributions. Guaranteed independencies of distributions can be deduced from the BN graph structure. D-separation gives precise conditional independence guarantees from the graph alone. A Bayes net's joint distribution may have further (conditional) independence that is not detectable until you inspect its specific distribution.