
CS 343: Artificial Intelligence Bayes Nets: Sampling Prof. Scott Niekum The University of Texas at Austin [These slides based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Bayes Net Representation. A directed, acyclic graph, one node per random variable. A conditional probability table (CPT) for each node: a collection of distributions over X, one for each combination of parents' values. Bayes nets implicitly encode joint distributions as a product of local conditional distributions. To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together: P(x_1, x_2, ..., x_n) = prod_i P(x_i | Parents(X_i)).

Inference by Enumeration vs. Variable Elimination. Why is inference by enumeration so slow? You join up the whole joint distribution before you sum out the hidden variables. Idea: interleave joining and marginalizing! This is called Variable Elimination. It is still NP-hard, but usually much faster than inference by enumeration.

Traffic Domain (R -> T -> L). Inference by Enumeration: join on r, join on t, eliminate r, eliminate t. Variable Elimination: join on r, eliminate r, join on t, eliminate t.

General Variable Elimination. Query: P(Q | E_1 = e_1, ..., E_k = e_k). Start with initial factors: the local CPTs (but instantiated by evidence). While there are still hidden variables (not Q or evidence): pick a hidden variable H, join all factors mentioning H, and eliminate (sum out) H. Finally, join all remaining factors and normalize.
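
As a rough illustration of this loop, here is a minimal Python sketch under the assumption that every variable is boolean and a factor is represented as a pair (list of variable names, dict from value tuples to numbers); the helper names join, eliminate, and variable_elimination are mine, not from the slides, and evidence is assumed to have already been instantiated into the initial CPT tables.

    from itertools import product

    # A factor is a pair: (list of variable names, dict mapping tuples of boolean
    # values -- in the same order as the name list -- to probabilities).

    def join(f1, f2):
        # Pointwise product of two factors over the union of their variables.
        vars1, t1 = f1
        vars2, t2 = f2
        out_vars = list(vars1) + [v for v in vars2 if v not in vars1]
        table = {}
        for assign in product([True, False], repeat=len(out_vars)):
            a = dict(zip(out_vars, assign))
            table[assign] = t1[tuple(a[v] for v in vars1)] * t2[tuple(a[v] for v in vars2)]
        return out_vars, table

    def eliminate(f, var):
        # Sum out `var` from factor f.
        vars_, t = list(f[0]), f[1]
        i = vars_.index(var)
        out_vars = vars_[:i] + vars_[i + 1:]
        table = {}
        for assign, p in t.items():
            key = assign[:i] + assign[i + 1:]
            table[key] = table.get(key, 0.0) + p
        return out_vars, table

    def variable_elimination(factors, hidden_vars):
        # For each hidden variable H: join all factors mentioning H, then sum H out.
        for h in hidden_vars:
            touching = [f for f in factors if h in f[0]]
            rest = [f for f in factors if h not in f[0]]
            if not touching:
                continue
            joined = touching[0]
            for f in touching[1:]:
                joined = join(joined, f)
            factors = rest + [eliminate(joined, h)]
        # Join all remaining factors; the caller normalizes the final table.
        result = factors[0]
        for f in factors[1:]:
            result = join(result, f)
        return result

A good elimination ordering keeps the intermediate factors produced by join small, which is exactly why interleaving joining and summing out can beat full enumeration.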

Variable Elimination Efficiency. Interleave joining and marginalizing, instead of fully joining all at once (i.e., enumeration). A factor over k variables with domain sizes d has d^k entries to compute. The ordering in which hidden variables are eliminated can affect the size of the factors generated. Worst case: running time exponential in the size of the Bayes net (NP-hard).

Approximate Inference: Sampling

Sampling. Basic idea: draw N samples from a sampling distribution S, compute an approximate posterior probability, and show this converges to the true probability P. Why sample? Learning: get samples from a distribution you don't know. Inference: getting samples can be faster than computing the right answer (e.g., with variable elimination).

Sampling from a given distribution. Step 1: get a sample u from the uniform distribution over [0, 1), e.g. random() in Python. Step 2: convert this sample u into an outcome for the given distribution by associating each outcome with a sub-interval of [0, 1) whose size equals the probability of that outcome. Example: C has P(C) = {red: 0.6, green: 0.1, blue: 0.3}, so the sub-intervals are red [0, 0.6), green [0.6, 0.7), blue [0.7, 1.0). If random() returns u = 0.83, then our sample is C = blue. After sampling 8 times, the empirical counts roughly track these probabilities.
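
A minimal Python sketch of this interval trick, using the red/green/blue distribution above; sample_discrete is an illustrative helper name, not something from the slides.

    import random

    def sample_discrete(dist):
        # Map a uniform draw u in [0, 1) to the outcome whose sub-interval contains u.
        u = random.random()
        cumulative = 0.0
        for outcome, p in dist.items():
            cumulative += p
            if u < cumulative:
                return outcome
        return outcome  # guard against floating-point round-off

    P_C = {"red": 0.6, "green": 0.1, "blue": 0.3}
    print([sample_discrete(P_C) for _ in range(8)])   # e.g., after sampling 8 times

With many draws, the empirical frequencies approach 0.6 / 0.1 / 0.3.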

Sampling in Bayes Nets: Prior Sampling, Rejection Sampling, Likelihood Weighting, Gibbs Sampling.

Prior Sampling

Prior Sampling. Network: Cloudy -> Sprinkler, Cloudy -> Rain, Sprinkler and Rain -> WetGrass.
P(C): +c 0.5, -c 0.5.
P(S | C): given +c: +s 0.1, -s 0.9; given -c: +s 0.5, -s 0.5.
P(R | C): given +c: +r 0.8, -r 0.2; given -c: +r 0.2, -r 0.8.
P(W | S, R): given +s, +r: +w 0.99, -w 0.01; given +s, -r: +w 0.90, -w 0.10; given -s, +r: +w 0.90, -w 0.10; given -s, -r: +w 0.01, -w 0.99.
Samples: +c, -s, +r, +w and -c, +s, -r, +w.

Prior Sampling:
For i = 1, 2, ..., n:
    Sample x_i from P(X_i | Parents(X_i))
Return (x_1, x_2, ..., x_n)
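
As a concrete (hypothetical) rendering of this loop for the Cloudy/Sprinkler/Rain/WetGrass network above, treating every variable as boolean and storing only the probability that each variable is true given its parents:

    import random

    # Probability that each variable is true given its parents' values
    # (Cloudy -> Sprinkler, Cloudy -> Rain, Sprinkler & Rain -> WetGrass).
    P_C = 0.5
    P_S = {True: 0.1, False: 0.5}                     # key: value of C
    P_R = {True: 0.8, False: 0.2}                     # key: value of C
    P_W = {(True, True): 0.99, (True, False): 0.90,   # key: (value of S, value of R)
           (False, True): 0.90, (False, False): 0.01}

    def prior_sample():
        # Sample every variable in topological order from P(X_i | Parents(X_i)).
        c = random.random() < P_C
        s = random.random() < P_S[c]
        r = random.random() < P_R[c]
        w = random.random() < P_W[(s, r)]
        return {"C": c, "S": s, "R": r, "W": w}

    # Estimate P(+w) from 10,000 prior samples.
    samples = [prior_sample() for _ in range(10_000)]
    print(sum(x["W"] for x in samples) / len(samples))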

Prior Sampling. This process generates samples with probability S_PS(x_1, ..., x_n) = prod_i P(x_i | Parents(X_i)) = P(x_1, ..., x_n), i.e. the BN's joint probability. Let the number of samples of a particular event be N_PS(x_1, ..., x_n) and the total number of samples be N. Then lim_{N -> infinity} N_PS(x_1, ..., x_n) / N = S_PS(x_1, ..., x_n) = P(x_1, ..., x_n), i.e., the sampling procedure is consistent.

Example. We'll get a bunch of samples from the BN: +c, -s, +r, +w; +c, +s, +r, +w; -c, +s, +r, -w; +c, -s, +r, +w; -c, -s, -r, +w. If we want to know P(W), we have counts <+w: 4, -w: 1>; normalize to get P(W) = <+w: 0.8, -w: 0.2>. This will get closer to the true distribution with more samples. We can estimate anything else too: what about P(C | +w)? P(C | +r, +w)? P(C | -r, -w)? Fast: can use fewer samples if there is less time (what's the drawback?).
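
One simple way to answer a conditional query such as P(C | +w) from those same samples, reusing the prior_sample helper sketched above, is to keep only the samples that agree with the evidence and normalize the counts; this is exactly the idea that rejection sampling formalizes next.

    # Reuses prior_sample() from the sketch above.
    samples = [prior_sample() for _ in range(10_000)]
    with_w = [x for x in samples if x["W"]]              # keep samples consistent with +w
    print(sum(x["C"] for x in with_w) / len(with_w))     # estimate of P(+c | +w)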

Rejection Sampling

Rejection Sampling. Let's say we want P(C): there is no point keeping all samples around, just tally counts of C as we go. Let's say we want P(C | +s): same thing, tally C outcomes, but ignore (reject) samples which don't have S = +s. This is called rejection sampling. It is also consistent for conditional probabilities (i.e., correct in the limit). Samples: +c, -s, +r, +w; +c, +s, +r, +w; -c, +s, +r, -w; +c, -s, +r, +w; -c, -s, -r, +w.

Rejection Sampling:
IN: evidence instantiation
For i = 1, 2, ..., n:
    Sample x_i from P(X_i | Parents(X_i))
    If x_i is not consistent with the evidence:
        Reject: return, and no sample is generated in this cycle
Return (x_1, x_2, ..., x_n)
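
A minimal Python sketch of this procedure for the same network and CPT numbers, estimating P(C | +s); the early-exit structure mirrors the rejection step above, and the helper name rejection_sample is mine.

    import random

    P_C = 0.5
    P_S = {True: 0.1, False: 0.5}
    P_R = {True: 0.8, False: 0.2}
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.01}

    def rejection_sample(evidence):
        # Sample in topological order; bail out as soon as a variable contradicts the evidence.
        sample = {}
        sample["C"] = random.random() < P_C
        if "C" in evidence and sample["C"] != evidence["C"]:
            return None
        sample["S"] = random.random() < P_S[sample["C"]]
        if "S" in evidence and sample["S"] != evidence["S"]:
            return None
        sample["R"] = random.random() < P_R[sample["C"]]
        if "R" in evidence and sample["R"] != evidence["R"]:
            return None
        sample["W"] = random.random() < P_W[(sample["S"], sample["R"])]
        if "W" in evidence and sample["W"] != evidence["W"]:
            return None
        return sample

    # Estimate P(+c | +s): tally C over the accepted samples only.
    kept = [s for s in (rejection_sample({"S": True}) for _ in range(20_000)) if s is not None]
    print(sum(s["C"] for s in kept) / len(kept))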

Likelihood Weighting

Likelihood Weighting. Problem with rejection sampling: if the evidence is unlikely, it rejects lots of samples, and the evidence is not exploited as you sample. Consider P(Shape | blue): prior samples such as (pyramid, green), (pyramid, red), (sphere, blue), (cube, red), (sphere, green) mostly get rejected. Idea: fix the evidence variables and sample the rest, giving samples like (pyramid, blue), (pyramid, blue), (sphere, blue), (cube, blue), (sphere, blue). Problem: the sample distribution is not consistent! Solution: weight each sample by the probability of the evidence given its parents.

Likelihood Weighting. Same Cloudy/Sprinkler/Rain/WetGrass network and CPTs as in the prior sampling example above. Sample: +c, +s, +r, +w.

Likelihood Weighting:
IN: evidence instantiation
w = 1.0
For i = 1, 2, ..., n:
    If X_i is an evidence variable:
        X_i = observation x_i for X_i
        Set w = w * P(x_i | Parents(X_i))
    Else:
        Sample x_i from P(X_i | Parents(X_i))
Return (x_1, x_2, ..., x_n), w
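
A minimal Python sketch of likelihood weighting for the same network, estimating P(C | +s, +w): evidence variables are clamped and contribute to the weight, and all other variables are sampled as before. The helper name weighted_sample and the chosen query are illustrative.

    import random

    P_C = 0.5
    P_S = {True: 0.1, False: 0.5}
    P_R = {True: 0.8, False: 0.2}
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.01}

    def weighted_sample(evidence):
        # Clamp evidence variables and multiply their likelihood into w; sample the rest.
        w, sample = 1.0, {}
        if "C" in evidence:
            sample["C"] = evidence["C"]
            w *= P_C if sample["C"] else 1 - P_C
        else:
            sample["C"] = random.random() < P_C
        if "S" in evidence:
            sample["S"] = evidence["S"]
            w *= P_S[sample["C"]] if sample["S"] else 1 - P_S[sample["C"]]
        else:
            sample["S"] = random.random() < P_S[sample["C"]]
        if "R" in evidence:
            sample["R"] = evidence["R"]
            w *= P_R[sample["C"]] if sample["R"] else 1 - P_R[sample["C"]]
        else:
            sample["R"] = random.random() < P_R[sample["C"]]
        pw = P_W[(sample["S"], sample["R"])]
        if "W" in evidence:
            sample["W"] = evidence["W"]
            w *= pw if sample["W"] else 1 - pw
        else:
            sample["W"] = random.random() < pw
        return sample, w

    # Estimate P(+c | +s, +w) as a weighted average over samples.
    draws = [weighted_sample({"S": True, "W": True}) for _ in range(20_000)]
    num = sum(w for s, w in draws if s["C"])
    print(num / sum(w for _, w in draws))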

Likelihood Weighting. Sampling distribution if z is sampled and e is fixed evidence: S_WS(z, e) = prod_i P(z_i | Parents(Z_i)). Now, samples have weights: w(z, e) = prod_j P(e_j | Parents(E_j)). Together, the weighted sampling distribution is consistent: S_WS(z, e) * w(z, e) = prod_i P(z_i | Parents(Z_i)) * prod_j P(e_j | Parents(E_j)) = P(z, e).

Likelihood Weighting. Likelihood weighting is good: we have taken evidence into account as we generate the sample, so our samples reflect the state of the world suggested by the evidence, and no sample is ever rejected. Likelihood weighting doesn't solve all our problems: evidence influences the choice of downstream variables, but not upstream ones (upstream samples are not more likely to match the evidence), which can produce many very small weights -> inefficient! We would like to consider evidence when we sample every variable: Gibbs sampling.

Gibbs Sampling

Gibbs Sampling. Procedure: keep track of a full instantiation x_1, x_2, ..., x_n. Start with an arbitrary instantiation consistent with the evidence. Sample one variable at a time, conditioned on all the rest, but keep the evidence fixed. Keep repeating this for a long time. Property: in the limit of repeating this infinitely many times, the resulting sample comes from the correct distribution. Rationale: both upstream and downstream variables condition on evidence. In contrast, likelihood weighting only conditions on upstream evidence, and hence the weights obtained in likelihood weighting can sometimes be very small. The sum of weights over all samples indicates how many effective samples were obtained, so we want high weights.

Gibbs Sampling Example: P(S | +r).
Step 1: Fix the evidence R = +r.
Step 2: Initialize the other variables randomly.
Step 3: Repeat the following: choose a non-evidence variable X and resample X from P(X | all other variables).
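
A minimal Python sketch of these three steps for the query P(S | +r) in the same network; for simplicity, the resampling step recomputes the full joint for both values of the chosen variable, which is correct but does more work than the Markov-blanket shortcut discussed below. The function names, burn-in, and step counts are arbitrary choices, not from the slides.

    import random

    P_C = 0.5
    P_S = {True: 0.1, False: 0.5}
    P_R = {True: 0.8, False: 0.2}
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.01}

    def joint(x):
        # Full joint P(c, s, r, w) as the product of the local CPTs.
        p = P_C if x["C"] else 1 - P_C
        p *= P_S[x["C"]] if x["S"] else 1 - P_S[x["C"]]
        p *= P_R[x["C"]] if x["R"] else 1 - P_R[x["C"]]
        pw = P_W[(x["S"], x["R"])]
        return p * (pw if x["W"] else 1 - pw)

    def gibbs(evidence, non_evidence, steps=50_000, burn_in=1_000):
        # Steps 1 & 2: fix the evidence, initialize the other variables randomly.
        x = dict(evidence)
        for v in non_evidence:
            x[v] = random.random() < 0.5
        samples = []
        # Step 3: repeatedly resample one non-evidence variable given all the others.
        for t in range(steps):
            v = random.choice(non_evidence)
            p_true = joint({**x, v: True})
            p_false = joint({**x, v: False})
            x[v] = random.random() < p_true / (p_true + p_false)
            if t >= burn_in:
                samples.append(dict(x))
        return samples

    samples = gibbs({"R": True}, ["C", "S", "W"])
    print(sum(x["S"] for x in samples) / len(samples))   # estimate of P(+s | +r)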

Efficient Resampling of One Variable. Sample from P(S | +c, +r, -w): since P(S | +c, +r, -w) = P(S, +c, +r, -w) / P(+c, +r, -w), every CPT that does not mention S cancels between numerator and denominator, so only the CPTs with S remain: P(S | +c, +r, -w) is proportional to P(S | +c) * P(-w | S, +r). More generally: only the CPTs that contain the resampled variable need to be considered and joined together.

Gibbs Sampling. How is this better than sampling from the full joint? In a Bayes net, sampling a variable given all the other variables (e.g. P(R | S, C, W)) is usually much easier than sampling from the full joint distribution. It only requires a join on the variable to be sampled (in this case, a join on R), and the resulting factor only depends on the variable's parents, its children, and its children's parents (this is often referred to as its Markov blanket).
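
A small sketch of that shortcut for the example P(S | +c, +r, -w): only the CPTs mentioning S, namely P(S | C) and P(W | S, R), survive the cancellation, so the conditional is proportional to their product (same CPT numbers as above).

    # P(+s | C) and P(+w | S, R) from the CPTs above.
    P_S = {True: 0.1, False: 0.5}
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.01}

    c, r, w = True, True, False                            # evidence: +c, +r, -w
    scores = {}
    for s in (True, False):
        p = P_S[c] if s else 1 - P_S[c]                    # P(s | +c)
        p *= P_W[(s, r)] if w else 1 - P_W[(s, r)]         # P(-w | s, +r)
        scores[s] = p
    total = sum(scores.values())
    print({s: p / total for s, p in scores.items()})       # roughly {+s: 0.011, -s: 0.989}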

Further Reading on Gibbs Sampling*. Gibbs sampling produces samples from the query distribution P(Q | e) in the limit of resampling infinitely often. Gibbs sampling is a special case of more general methods called Markov chain Monte Carlo (MCMC) methods. Metropolis-Hastings is one of the more famous MCMC methods (in fact, Gibbs sampling is a special case of Metropolis-Hastings). You may read about Monte Carlo methods; they're just sampling.

Bayes Net Sampling Summary. Prior Sampling: P(Q). Rejection Sampling: P(Q | e). Likelihood Weighting: P(Q | e). Gibbs Sampling: P(Q | e).