CSE 573: Artificial Intelligence, Autumn 2012. Outline: Bayes Nets: Big Picture; Bayes Net Semantics; Hidden Markov Models; Example Bayes Net: Car Insurance.

CSE 573: Artificial Intelligence, Autumn 2012
Bayesian Networks
Dan Weld
Many slides adapted from Dan Klein, Stuart Russell, Andrew Moore & Luke Zettlemoyer

Outline
Probabilistic models (and inference)
Bayesian Networks (BNs)
Independence in BNs
Efficient Inference in BNs
Learning
This is a whirlwind tour, so take CSE 515 (Statistical Methods), taught by Ben Taskar, Spring 2013.

Bayes Nets: Big Picture
Two problems with using full joint distribution tables as our probabilistic models:
Unless there are only a few variables, the joint is far too big to represent explicitly.
It is hard to learn (estimate) anything empirically about more than a few variables at a time.
Bayes nets are a technique for describing complex joint distributions (models) using simple, local distributions (conditional probabilities), also known as a graphical model. We describe how variables locally interact; local interactions chain together to give global, indirect interactions.

Bayes Net Semantics
Formally, a Bayes net consists of:
A set of nodes, one per random variable X.
Directed edges, forming an acyclic graph.
A CPT (Conditional Probability Table) for each node: a collection of distributions over X, one for each combination of the parents' values, P(X | A_1, ..., A_n).
A Bayes net = topology (graph) + local conditional probabilities.

Hidden Markov Models
An HMM is a Bayes net with two random variables at each time step: a hidden state X_i and an observation E_i, chained as X_1 -> X_2 -> ... -> X_N, with each E_i depending on X_i. An HMM is defined by:
Initial distribution: P(X_1)
Transitions: P(X_t | X_{t-1})
Emissions: P(E_t | X_t)

Example Bayes Net: Car Insurance (network diagram; figure only)
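
To make the HMM-as-Bayes-net idea concrete, here is a minimal Python sketch (not from the slides; the weather/umbrella variables and numbers are made up) that defines an HMM by exactly the three pieces above and draws a sample trajectory.

```python
# Sketch (toy numbers): an HMM is a Bayes net with repeated structure, defined by
# an initial distribution, a transition model, and an emission model.
# Here X_t is the weather and E_t is whether an umbrella is seen.
import random

INITIAL = {'rain': 0.5, 'sun': 0.5}                     # P(X_1)
TRANSITION = {'rain': {'rain': 0.7, 'sun': 0.3},        # P(X_t | X_{t-1})
              'sun':  {'rain': 0.3, 'sun': 0.7}}
EMISSION = {'rain': {'umbrella': 0.9, 'none': 0.1},     # P(E_t | X_t)
            'sun':  {'umbrella': 0.2, 'none': 0.8}}

def pick(dist):
    """Sample a key from a {value: probability} dictionary."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # guard against floating-point rounding

def sample_hmm(steps):
    x = pick(INITIAL)
    trajectory = []
    for _ in range(steps):
        trajectory.append((x, pick(EMISSION[x])))
        x = pick(TRANSITION[x])
    return trajectory

print(sample_hmm(5))   # e.g. [('rain', 'umbrella'), ('rain', 'none'), ...]
```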

Probabilities in BNs
Bayes nets implicitly encode joint distributions as a product of local conditional distributions. To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together:
P(x_1, ..., x_n) = Π_i P(x_i | parents(X_i))
Does this always work? Why? Not every BN can represent every joint distribution; the topology enforces certain conditional independence assumptions. Compare to the exact decomposition according to the chain rule!

Example: Independent Coin Flips
N independent coin flips X_1, X_2, ..., X_n, each with P(h) = 0.5 and P(t) = 0.5. There are no interactions between the variables, so the net has no edges; the full joint table would need 2^n - 1 parameters, while the Bayes net needs only n.

Conditional Independence
Unconditional (absolute) independence is very rare (why?). Conditional independence is our most basic and robust form of knowledge about uncertain environments. What about fire, smoke, alarm?

Example: Alarm Network
Variables: B: Burglary, E: Earthquake, A: Alarm goes off, J: John calls, M: Mary calls. How big is the joint distribution? 2^n - 1 = 31 parameters. With the Bayes net (B -> A, E -> A, A -> J, A -> M), only 10 parameters are needed:
P(B): +b 0.001, -b 0.999
P(E): +e 0.002, -e 0.998
P(A | B, E): +b +e +a 0.95, +b +e -a 0.05, +b -e +a 0.94, +b -e -a 0.06, -b +e +a 0.29, -b +e -a 0.71, -b -e +a 0.001, -b -e -a 0.999
P(J | A): +a +j 0.9, +a -j 0.1, -a +j 0.05, -a -j 0.95
P(M | A): +a +m 0.7, +a -m 0.3, -a +m 0.01, -a -m 0.99
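
As a concrete check of the product rule above, the following Python sketch stores the alarm-network CPTs from this slide as plain dictionaries and multiplies the relevant local conditionals to score one full assignment. The representation is illustrative, not the course's code.

```python
# Minimal sketch: alarm-network CPTs stored as dicts, and the probability of one
# full assignment computed as P(x1,...,xn) = prod_i P(xi | parents(Xi)).

P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {  # P(+a | b, e), keyed by (b, e)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}
P_J = {True: 0.9, False: 0.05}   # P(+j | a)
P_M = {True: 0.7, False: 0.01}   # P(+m | a)

def p_full(b, e, a, j, m):
    """Probability of a complete assignment, as a product of local CPT entries."""
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return P_B[b] * P_E[e] * pa * pj * pm

# Example: a burglary with no earthquake sets off the alarm,
# John calls but Mary does not.
print(p_full(b=True, e=False, a=True, j=True, m=False))
```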

Example: Traffic II
Let's build a graphical model. Variables: T: Traffic, R: It rains, L: Low pressure, D: Roof drips, B: Ballgame, C: Cavity.

Changing Bayes Net Structure
The same joint distribution can be encoded in many different Bayes nets. Analysis question: given some edges, what other edges do you need to add? One answer: fully connect the graph. Better answer: don't make any false conditional independence assumptions.

Example: Independence
For the two-node graph with no edge between X_1 and X_2 (each a fair coin, P(h) = 0.5, P(t) = 0.5), you can fiddle with the CPTs all you want, but you won't be able to represent any distribution in which the flips are dependent!

Example: Coins
Extra arcs don't prevent representing independence; they just allow non-independence. With the arc X_1 -> X_2 and the CPT P(X_2 | X_1) set to 0.5 for every entry (h|h 0.5, t|h 0.5, h|t 0.5, t|t 0.5), the net still encodes independent coins; in general the extra arc lets it encode all distributions over the two flips. Adding unneeded arcs isn't wrong, it's just inefficient. A small numeric check appears below.

Topology Limits Distributions
Given some graph topology G, only certain joint distributions can be encoded. The graph structure guarantees certain (conditional) independences (there might be more independence than the graph shows). Adding arcs increases the set of representable distributions, but has several costs. Full conditioning (a fully connected graph) can encode any distribution.

Independence in a BN
Important question about a BN: are two nodes independent given certain evidence? If yes, you can prove it using algebra (tedious in general); if no, you can prove it with a counterexample. Example: X -> Y -> Z. Question: are X and Z independent? Answer: no. Example: low pressure (X) causes rain (Y), which causes traffic (Z). Knowledge about X may change belief in Z, and knowledge about Z may change belief in X (via Y). Addendum: they could be independent for particular CPTs; how?
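
A small numeric illustration of the coins point, assuming the two-coin structures described above: with the arc X_1 -> X_2 and a uniform CPT the flips stay independent, while a skewed CPT on the same topology makes them dependent.

```python
# Sketch (not from the slides): the two-coin net X1 -> X2. A uniform CPT on the
# extra arc still encodes independent flips; a skewed CPT on the same topology
# encodes dependence. Structure only limits, never forces, dependence.

def joint(p_x1_h, cpt):              # cpt[x1] = P(X2 = 'h' | X1 = x1)
    out = {}
    for x1, p1 in (('h', p_x1_h), ('t', 1 - p_x1_h)):
        for x2 in ('h', 't'):
            p2 = cpt[x1] if x2 == 'h' else 1 - cpt[x1]
            out[(x1, x2)] = p1 * p2
    return out

uniform = joint(0.5, {'h': 0.5, 't': 0.5})   # extra arc, still independent
skewed  = joint(0.5, {'h': 0.9, 't': 0.1})   # same arc, now dependent

for name, j in (('uniform', uniform), ('skewed', skewed)):
    p_x2h = j[('h', 'h')] + j[('t', 'h')]
    # Independence holds iff P(x1, x2) = P(x1) P(x2) for every entry.
    independent = abs(j[('h', 'h')] - 0.5 * p_x2h) < 1e-12
    print(name, 'independent:', independent)
```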

Causal Chains
This configuration is a causal chain: X -> Y -> Z (X: Low pressure, Y: Rain, Z: Traffic). Is X independent of Z given Y? Yes! Evidence along the chain blocks the influence.

Common Parent
Another basic configuration: two effects of the same parent, X <- Y -> Z (Y: Project due, X: Forum busy, Z: Lab full). Are X and Z independent? Are X and Z independent given Y? Yes! Observing the cause blocks influence between the effects.

Common Effect
The last configuration: two causes of one effect (a v-structure), X -> Y <- Z (X: Raining, Z: Ballgame, Y: Traffic). Are X and Z independent? Yes: the ballgame and the rain cause traffic, but they are not correlated (we still need to prove they must be independent; try it!). Are X and Z independent given Y? No: seeing traffic puts the rain and the ballgame in competition as explanations! This is backwards from the other cases: observing an effect activates influence between its possible causes.

The General Case
Any complex example can be analyzed using these three canonical cases. General question: in a given BN, are two variables independent (given evidence)? Solution: analyze the graph.
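
The chain case can be checked numerically. In this sketch the CPT numbers are invented for illustration; it builds the tiny joint for X -> Y -> Z and confirms that P(Z | X) depends on X while P(Z | X, Y) = P(Z | Y).

```python
# Sketch (hypothetical numbers): causal chain X -> Y -> Z
# (low pressure -> rain -> traffic). X influences Z, but conditioning on Y
# blocks the influence: P(Z | X, Y) = P(Z | Y).

P_X = {True: 0.3, False: 0.7}     # hypothetical P(low pressure)
P_Y = {True: 0.8, False: 0.1}     # P(rain | X)
P_Z = {True: 0.9, False: 0.2}     # P(traffic | Y)

def joint(x, y, z):
    py = P_Y[x] if y else 1 - P_Y[x]
    pz = P_Z[y] if z else 1 - P_Z[y]
    return P_X[x] * py * pz

def p_z_given(x=None, y=None):
    """P(Z = true | the given evidence), by enumeration over the tiny joint."""
    num = den = 0.0
    for xx in (True, False):
        for yy in (True, False):
            if x is not None and xx != x: continue
            if y is not None and yy != y: continue
            for zz in (True, False):
                p = joint(xx, yy, zz)
                den += p
                if zz: num += p
    return num / den

print(p_z_given(x=True), p_z_given(x=False))           # differ: X influences Z
print(p_z_given(x=True, y=True), p_z_given(y=True))    # equal: Y blocks X
```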

Reachability (D-Separation)
Question: are X and Y conditionally independent given evidence variables {Z}? Yes, if X and Y are separated by Z. Look for active paths from X to Y; no active paths means independence! A path is active if every triple along it is active:
Causal chain A -> B -> C where B is unobserved (either direction)
Common cause A <- B -> C where B is unobserved
Common effect (aka v-structure) A -> B <- C where B or one of its descendants is observed
All it takes to block a path is a single inactive segment.

Examples: Independent?
(The example slides show small networks over R: Raining, T: Traffic, B: Ballgame, L: Low pressure, D: Roof drips, S: I'm sad, and answer a series of Yes/No independence questions by checking for active paths; diagrams only.)

Given Its Markov Blanket, X Is Independent of All Other Nodes
MB(X) = Parents(X) ∪ Children(X) ∪ Parents(Children(X))
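
Below is a sketch of the standard reachability test for d-separation (first collect the evidence's ancestors, then walk directed paths applying the active-triple rules above). It is illustrative code, not from the lecture, and the usage lines reproduce the alarm-network answers discussed earlier.

```python
# Sketch of a d-separation (reachability) test on a directed graph.
# `parents` maps each node to a list of its parents.
from collections import defaultdict, deque

def d_separated(parents, x, y, evidence):
    """True iff x and y are d-separated given the set `evidence`."""
    children = defaultdict(list)
    for node, ps in parents.items():
        for p in ps:
            children[p].append(node)

    # Phase 1: evidence nodes and all their ancestors (needed for v-structures).
    anc, stack = set(), list(evidence)
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(parents.get(n, []))

    # Phase 2: search over (node, direction) pairs.
    # 'up' = reached from a child, 'down' = reached from a parent.
    visited, frontier = set(), deque([(x, 'up')])
    while frontier:
        node, direction = frontier.popleft()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node not in evidence and node == y:
            return False                       # found an active path to y
        if direction == 'up' and node not in evidence:
            frontier.extend((p, 'up') for p in parents.get(node, []))
            frontier.extend((c, 'down') for c in children[node])
        elif direction == 'down':
            if node not in evidence:           # chain continues through node
                frontier.extend((c, 'down') for c in children[node])
            if node in anc:                    # v-structure activated by evidence
                frontier.extend((p, 'up') for p in parents.get(node, []))
    return True

# Alarm network: B -> A <- E, A -> J, A -> M.
parents = {'B': [], 'E': [], 'A': ['B', 'E'], 'J': ['A'], 'M': ['A']}
print(d_separated(parents, 'B', 'E', set()))      # True: unlinked causes
print(d_separated(parents, 'B', 'E', {'A'}))      # False: observing the effect
print(d_separated(parents, 'J', 'M', {'A'}))      # True: common cause observed
```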

Summary
Bayes nets compactly encode joint distributions (JDs); other graphical models do too: factor graphs, CRFs, and so on. Guaranteed independencies of distributions can be deduced from the BN graph structure: d-separation gives precise conditional independence guarantees from the graph alone. A Bayes net's joint distribution may have further (conditional) independences that are visible only from the specific CPTs.

Outline
Probabilistic models (and inference)
Bayesian Networks (BNs)
Independence in BNs
Efficient Inference in BNs
Learning

Inference in BNs
This graphical independence representation yields efficient inference schemes. We generally want to compute a marginal probability Pr(Z), or Pr(Z | E) where E is (conjunctive) evidence. Z: query variable(s); E: evidence variable(s); everything else: hidden variables. Computations are organized by the network topology.

Example: P(B | J = true, M = true) in the alarm network (Burglary, Earthquake, Alarm, JohnCalls, MaryCalls):
P(b | j, m) ∝ Σ_{e,a} P(b, j, m, e, a)

Variable Elimination
P(b | j, m) ∝ Σ_e Σ_a P(b) P(e) P(a | b, e) P(j | a) P(m | a)
           ∝ P(b) Σ_e P(e) Σ_a P(a | b, e) P(j | a) P(m | a)
Repeated computations suggest dynamic programming.
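
The query P(B | j, m) can be computed directly by enumeration, which is what variable elimination speeds up by reordering the sums. A self-contained sketch using the CPT values from the alarm-network slide:

```python
# Sketch: P(B | j = true, m = true) by inference by enumeration: sum the full
# joint over the hidden variables E and A, then normalize.
from itertools import product

P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.9, False: 0.05}
P_M = {True: 0.7, False: 0.01}

def joint(b, e, a, j, m):
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return P_B[b] * P_E[e] * pa * pj * pm

# Unnormalized P(b, j=true, m=true), summing out the hidden variables e, a.
scores = {b: sum(joint(b, e, a, True, True)
                 for e, a in product([True, False], repeat=2))
          for b in (True, False)}
z = sum(scores.values())
print({b: p / z for b, p in scores.items()})   # roughly {True: 0.284, False: 0.716}
```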

Reducing 3-SAT to Bayes Nets
(Diagram only on the slide; the reduction shows that exact inference in Bayes nets is NP-hard in general.)

Approximate Inference in Bayes Nets
Sampling-based methods (based on slides by Jack Breese and Daphne Koller).

A Bayes net is a generative model: we can easily generate samples from the distribution represented by the Bayes net, generating one variable at a time in topological order. We then use the samples to compute probabilities, say P(c) or P(n | c). (The following slides step through a sampling example as pictures only.)
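
A sketch of prior (forward) sampling, using the alarm-network CPTs from earlier as the example model (the original slides use a different example network from Breese and Koller):

```python
# Sketch of prior (forward) sampling from a Bayes net: sample each variable in
# topological order from its CPT given the already-sampled parents, then
# estimate a probability as a sample frequency.
import random

P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
# (variable, parents, P(variable = True | parent values)), in topological order.
NETWORK = [('B', [], {(): 0.001}), ('E', [], {(): 0.002}), ('A', ['B', 'E'], P_A),
           ('J', ['A'], {(True,): 0.9, (False,): 0.05}),
           ('M', ['A'], {(True,): 0.7, (False,): 0.01})]

def prior_sample():
    sample = {}
    for var, parents, cpt in NETWORK:
        p_true = cpt[tuple(sample[p] for p in parents)]
        sample[var] = random.random() < p_true
    return sample

# Estimate P(M = true) from 100,000 forward samples.
samples = [prior_sample() for _ in range(100_000)]
print(sum(s['M'] for s in samples) / len(samples))   # close to the exact ~0.0117
```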


Rejection Sampling
Sample from the prior and reject any sample that does not match the evidence. This returns consistent posterior estimates, but it is hopelessly expensive if P(e) is small, and P(e) drops exponentially with the number of evidence variables.

Likelihood Weighting
Idea: fix the evidence variables and sample only the non-evidence variables, weighting each sample by the likelihood of the evidence. (The intervening slides work through an example in pictures only.)
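
Hedged sketches of both methods, run on the alarm network from earlier (again a stand-in for the example network in the original slides):

```python
# Sketch: rejection sampling and likelihood weighting for P(B | J = true, M = true)
# on the alarm network (CPT values as on the earlier slide).
import random

P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
NETWORK = [('B', [], {(): 0.001}), ('E', [], {(): 0.002}), ('A', ['B', 'E'], P_A),
           ('J', ['A'], {(True,): 0.9, (False,): 0.05}),
           ('M', ['A'], {(True,): 0.7, (False,): 0.01})]

def prior_sample():
    s = {}
    for var, parents, cpt in NETWORK:
        s[var] = random.random() < cpt[tuple(s[p] for p in parents)]
    return s

def rejection_sample(query, evidence, n=200_000):
    counts = {True: 0, False: 0}
    for _ in range(n):
        s = prior_sample()
        if all(s[v] == val for v, val in evidence.items()):   # keep only matches
            counts[s[query]] += 1
    total = counts[True] + counts[False]
    return counts[True] / total if total else float('nan')

def likelihood_weighting(query, evidence, n=200_000):
    weights = {True: 0.0, False: 0.0}
    for _ in range(n):
        sample, w = {}, 1.0
        for var, parents, cpt in NETWORK:
            p_true = cpt[tuple(sample[p] for p in parents)]
            if var in evidence:
                sample[var] = evidence[var]          # evidence is fixed, not sampled,
                w *= p_true if evidence[var] else 1 - p_true   # but it weights the sample
            else:
                sample[var] = random.random() < p_true
        weights[sample[query]] += w
    return weights[True] / (weights[True] + weights[False])

evidence = {'J': True, 'M': True}
print(rejection_sample('B', evidence))      # noisy: most samples are rejected
print(likelihood_weighting('B', evidence))  # both approach the exact value, about 0.284
```

Note how few samples survive rejection here: P(j, m) is only about 0.002, which is exactly the failure mode the slide describes.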


Likelihood Weighting: Analysis
Sampling probability: S(z, e) = Π_i P(z_i | Parents(Z_i)); this is neither the prior nor the posterior.
Weight for a sample <z, e>: w(z, e) = Π_i P(e_i | Parents(E_i)).
Weighted sampling probability: S(z, e) · w(z, e) = Π_i P(z_i | Parents(Z_i)) · Π_i P(e_i | Parents(E_i)) = P(z, e), so likelihood weighting returns consistent estimates. However, performance degrades with many evidence variables: a few samples get the majority of the weight, and late-occurring evidence variables don't guide sample generation.

MCMC with Gibbs Sampling
Fix the values of the observed variables and set the values of all non-observed variables randomly. Then perform a random walk through the space of complete variable assignments. On each move:
1. Pick a variable X.
2. Calculate Pr(X = true | all other variables).
3. Set X to true with that probability.
Repeat many times. The frequency with which any variable Y is true approximates its posterior probability. This converges to the true posterior when the frequencies stop changing significantly (a stable distribution; "mixing").

Markov Blanket Sampling
How do we calculate Pr(X = true | all other variables)? Recall that a variable is independent of all others given its Markov blanket: its parents, its children, and the other parents of its children. So the problem becomes calculating Pr(X = true | MB(X)), which fortunately is easy to solve exactly:
P(X | MB(X)) ∝ P(X | Parents(X)) · Π_{Y ∈ Children(X)} P(Y | Parents(Y))
Example (network A -> X -> B, C -> B):
P(X | A, B, C) = P(X, A, B, C) / P(A, B, C)
              = P(A) P(X | A) P(C) P(B | X, C) / P(A, B, C)
              ∝ P(X | A) P(B | X, C)
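
A sketch of Gibbs sampling on the alarm network (a stand-in for the slides' example), resampling each hidden variable from its Markov-blanket conditional as derived above:

```python
# Sketch: Gibbs sampling for P(B | j, m). Each step resamples one hidden variable
# from Pr(X | all other variables), computed via the Markov-blanket product
# P(x | Parents(x)) * prod over children y of P(y | Parents(y)).
import random

P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
NETWORK = {'B': ([], {(): 0.001}), 'E': ([], {(): 0.002}), 'A': (['B', 'E'], P_A),
           'J': (['A'], {(True,): 0.9, (False,): 0.05}),
           'M': (['A'], {(True,): 0.7, (False,): 0.01})}
CHILDREN = {'B': ['A'], 'E': ['A'], 'A': ['J', 'M'], 'J': [], 'M': []}

def local_prob(var, value, state):
    parents, cpt = NETWORK[var]
    p = cpt[tuple(state[p] for p in parents)]
    return p if value else 1 - p

def markov_blanket_prob(var, state):
    """Pr(var = True | all other variables), via the Markov blanket."""
    scores = {}
    for value in (True, False):
        s = dict(state, **{var: value})
        score = local_prob(var, value, s)
        for child in CHILDREN[var]:
            score *= local_prob(child, s[child], s)
        scores[value] = score
    return scores[True] / (scores[True] + scores[False])

def gibbs(query, evidence, n=100_000):
    hidden = [v for v in NETWORK if v not in evidence]
    state = dict(evidence, **{v: random.random() < 0.5 for v in hidden})
    count = 0
    for _ in range(n):
        var = random.choice(hidden)                       # pick a variable
        state[var] = random.random() < markov_blanket_prob(var, state)
        count += state[query]                             # record its value
    return count / n

print(gibbs('B', {'J': True, 'M': True}))   # approaches the exact value, about 0.284
```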

Gibbs Sampling Walkthrough (slides 102-110; figures with running captions)
Fix the evidence (here s and b) and randomly set the hidden variables, e.g., H = h and G = g. Then repeatedly resample one hidden variable at a time from its conditional given everything else:
Sample H using P(H | s, g, b); suppose the result is ~h.
Sample G using P(G | s, ~h, b); suppose the result is g.
Sample G again using P(G | s, ~h, b); suppose the result is ~g.
...and so on, recording the value of each variable at every step.

Gibbs MCMC Summary
P(X = x | E) = (number of samples with X = x) / (total number of samples).
Advantages: no samples are discarded; there is no problem with samples of low weight; it can be implemented very efficiently (on the order of 10K samples per second).
Disadvantages: it can get stuck if the relationship between variables is deterministic; many variations have been devised to make MCMC more robust.

Other Inference Methods
Exact inference: junction tree. Approximate inference: belief propagation, variational methods.

Outline
Probabilistic models
Bayesian Networks (BNs)
Independence in BNs
Efficient Inference in BNs
Learning