Artificial Intelligence: Reasoning Under Uncertainty/Bayes Nets

Bayesian Learning

Conditional Probability: the probability of an event given the occurrence of some other event.

P(X | Y) = P(X ∩ Y) / P(Y) = P(X, Y) / P(Y)

Example: You've been keeping track of the last 1000 emails you received. You find that 100 of them are spam. You also find that 200 of them were put in your junk folder, of which 90 were spam.

What is the probability an email you receive is spam?
P(X) = 100/1000 = 0.1

What is the probability an email you receive is put in your junk folder?
P(Y) = 200/1000 = 0.2

Given that an email is in your junk folder, what is the probability it is spam?
P(X | Y) = P(X ∩ Y) / P(Y) = 0.09 / 0.2 = 0.45

Given that an email is spam, what is the probability it is in your junk folder?
P(Y | X) = P(X ∩ Y) / P(X) = 0.09 / 0.1 = 0.9

Deriving Bayes Rule

P(X | Y) = P(X ∩ Y) / P(Y) and P(Y | X) = P(X ∩ Y) / P(X)

Solving each for P(X ∩ Y) and equating gives Bayes rule:

P(X | Y) = P(Y | X) P(X) / P(Y)
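To make the arithmetic concrete, here is a minimal Python check of the email example above; the counts (1000 emails, 100 spam, 200 junk, 90 spam-and-junk) come straight from the example, and everything else is just arithmetic.

```python
# Counts from the email example above.
total = 1000
spam = 100          # emails that are spam
junk = 200          # emails placed in the junk folder
spam_and_junk = 90  # emails that are both spam and in the junk folder

p_spam = spam / total                    # P(X) = 0.1
p_junk = junk / total                    # P(Y) = 0.2
p_spam_and_junk = spam_and_junk / total  # P(X ∩ Y) = 0.09

p_spam_given_junk = p_spam_and_junk / p_junk  # P(X | Y) = 0.45
p_junk_given_spam = p_spam_and_junk / p_spam  # P(Y | X) = 0.9

# Bayes rule recovers P(X | Y) from P(Y | X), P(X), P(Y).
print(p_spam_given_junk, p_junk_given_spam * p_spam / p_junk)  # both 0.45
```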

General Application to Data Models

In machine learning we have a space H of hypotheses: h1, h2, ..., hn (possibly infinite). We also have a set D of data. We want to calculate P(h | D). Bayes rule gives us:

P(h | D) = P(D | h) P(h) / P(D)

Terminology

Prior probability of h, P(h): probability that hypothesis h is true given our prior knowledge. If we have no prior knowledge, all h in H are equally probable.

Posterior probability of h, P(h | D): probability that hypothesis h is true, given the data D.

Likelihood of D, P(D | h): probability that we will see data D, given that hypothesis h is true.

Marginal likelihood of D: P(D) = Σ_h P(D | h) P(h)

A Bayesian Approach to the Monty Hall Problem You are a contestant on a game show. There are 3 doors, A, B, and C. There is a new car behind one of them and goats behind the other two. Monty Hall, the host, knows what is behind the doors. He asks you to pick a door, any door. You pick door A. Monty tells you he will open a door, different from A, that has a goat behind it. He opens door B: behind it there is a goat. Monty now gives you a choice: Stick with your original choice A or switch to C.

Bayesian probability formulation

Hypothesis space H:
h1 = car is behind door A
h2 = car is behind door B
h3 = car is behind door C

Data D: after you picked door A, Monty opened B to show a goat.

Prior probabilities: P(h1) = 1/3, P(h2) = 1/3, P(h3) = 1/3

Likelihoods: P(D | h1) = 1/2, P(D | h2) = 0, P(D | h3) = 1

What are P(h1 | D), P(h2 | D), and P(h3 | D)?

Marginal likelihood: P(D) = P(D | h1)P(h1) + P(D | h2)P(h2) + P(D | h3)P(h3) = 1/6 + 0 + 1/3 = 1/2

By Bayes rule:

P(h1 | D) = P(D | h1) P(h1) / P(D) = (1/2)(1/3)(2) = 1/3
P(h2 | D) = P(D | h2) P(h2) / P(D) = (0)(1/3)(2) = 0
P(h3 | D) = P(D | h3) P(h3) / P(D) = (1)(1/3)(2) = 2/3

So you should switch!
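As a sanity check on these posteriors, here is a small Monte Carlo simulation of the game (a sketch; the door labels and the always-pick-A player are modeling assumptions, not part of the slide).

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round; return True if the player wins the car."""
    doors = ['A', 'B', 'C']
    car = random.choice(doors)
    pick = 'A'  # you always pick door A, as in the example
    # Monty opens a door that is neither your pick nor the car.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

trials = 100_000
wins_switch = sum(monty_hall_trial(True) for _ in range(trials))
wins_stay = sum(monty_hall_trial(False) for _ in range(trials))
print("P(win | switch) ≈", wins_switch / trials)  # ≈ 2/3
print("P(win | stay)   ≈", wins_stay / trials)    # ≈ 1/3
```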

MAP ("maximum a posteriori") Learning

Bayes rule: P(h | D) = P(D | h) P(h) / P(D)

Goal of learning: find the maximum a posteriori hypothesis h_MAP:

h_MAP = argmax_{h in H} P(h | D)
      = argmax_{h in H} P(D | h) P(h) / P(D)
      = argmax_{h in H} P(D | h) P(h)

because P(D) is a constant independent of h.

Note: if every h in H is equally probable, then

h_MAP = argmax_{h in H} P(D | h)

and in this case h_MAP coincides with the maximum likelihood hypothesis, h_ML.

A Medical Example

Toby takes a test for leukemia. The test has two outcomes: positive and negative. It is known that if the patient has leukemia, the test is positive 98% of the time. If the patient does not have leukemia, the test is positive 3% of the time. It is also known that 0.008 of the population has leukemia. Toby's test is positive. Which is more likely: Toby has leukemia, or Toby does not have leukemia?

Hypothesis space:
h1 = Toby has leukemia
h2 = Toby does not have leukemia

Prior: 0.008 of the population has leukemia, so P(h1) = 0.008 and P(h2) = 0.992.

Likelihoods:
P(+ | h1) = 0.98, P(− | h1) = 0.02
P(+ | h2) = 0.03, P(− | h2) = 0.97

Observed data: the blood test is + for this patient.

In summary:

P(h1) = 0.008, P(h2) = 0.992
P(+ | h1) = 0.98, P(− | h1) = 0.02
P(+ | h2) = 0.03, P(− | h2) = 0.97

h_MAP = argmax_{h in H} P(D | h) P(h):

P(+ | leukemia) P(leukemia) = (0.98)(0.008) = 0.0078
P(+ | ¬leukemia) P(¬leukemia) = (0.03)(0.992) = 0.0298

Thus h_MAP = ¬leukemia.

What is P(leukemia | +)?

P(h | D) = P(D | h) P(h) / P(D), so

P(leukemia | +) = 0.0078 / (0.0078 + 0.0298) ≈ 0.21
P(¬leukemia | +) = 0.0298 / (0.0078 + 0.0298) ≈ 0.79

These are the posterior probabilities.
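The same posterior can be computed in a few lines; this is a minimal sketch that just normalizes the two unnormalized scores above.

```python
# Prior and likelihoods from the leukemia example.
p_leukemia = 0.008
p_pos_given_leukemia = 0.98
p_pos_given_healthy = 0.03

# Unnormalized posterior scores P(+ | h) P(h) for each hypothesis.
score_leukemia = p_pos_given_leukemia * p_leukemia      # ≈ 0.0078
score_healthy = p_pos_given_healthy * (1 - p_leukemia)  # ≈ 0.0298

# Normalize by P(+) = sum of the scores.
posterior_leukemia = score_leukemia / (score_leukemia + score_healthy)
print(round(posterior_leukemia, 2))  # ≈ 0.21: a positive test is still more likely a false alarm
```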

Bayesianism vs. Frequentism

Classical (frequentist) probability: the probability of a particular event is defined relative to its frequency in a sample space of events. E.g., the probability that a coin will come up heads on the next trial is defined relative to the frequency of heads in a sample space of coin tosses.

Bayesian probability: combine the prior belief you have in a proposition with your subsequent observations of events. Example: a Bayesian can assign a probability to the statement "There was life on Mars a billion years ago," but a frequentist cannot.

Independence and Conditional Independence

Recall that two random variables X and Y are independent if P(X, Y) = P(X) P(Y).

Two random variables X and Y are conditionally independent given C if P(X, Y | C) = P(X | C) P(Y | C).

Naive Bayes Classifier

Let f(x) be a target function for classification: f(x) ∈ {+1, −1}. Let x = (x1, x2, ..., xn). We want to find the most probable class value given the data x:

class_MAP = argmax_{class in {+1,−1}} P(class | D)
          = argmax_{class in {+1,−1}} P(class | x1, x2, ..., xn)

By Bayes theorem:

class_MAP = argmax_{class in {+1,−1}} P(x1, x2, ..., xn | class) P(class) / P(x1, x2, ..., xn)
          = argmax_{class in {+1,−1}} P(x1, x2, ..., xn | class) P(class)

P(class) can be estimated from the training data. How? However, in general it is not practical to use the training data to estimate P(x1, x2, ..., xn | class). Why not?

Naive Bayes classifier: assume

P(x1, x2, ..., xn | class) = P(x1 | class) P(x2 | class) ··· P(xn | class)

Is this a good assumption?

Given this assumption, here's how to classify an instance x = (x1, x2, ..., xn):

class_NB(x) = argmax_{class in {+1,−1}} P(class) ∏_i P(xi | class)

To train: estimate the values of these various probabilities over the training set.

Training data:

Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No

Test data:

D15  Sunny     Cool  High      Strong  ?

Use the training data to compute a probabilistic model:

P(Outlook = Sunny | Yes) = 2/9        P(Outlook = Sunny | No) = 3/5
P(Outlook = Overcast | Yes) = 4/9     P(Outlook = Overcast | No) = 0
P(Outlook = Rain | Yes) = 3/9         P(Outlook = Rain | No) = 2/5
P(Temperature = Hot | Yes) = 2/9      P(Temperature = Hot | No) = 2/5
P(Temperature = Mild | Yes) = 4/9     P(Temperature = Mild | No) = 2/5
P(Temperature = Cool | Yes) = 3/9     P(Temperature = Cool | No) = 1/5
P(Humidity = High | Yes) = 3/9        P(Humidity = High | No) = 4/5
P(Humidity = Normal | Yes) = 6/9      P(Humidity = Normal | No) = 1/5
P(Wind = Strong | Yes) = 3/9          P(Wind = Strong | No) = 3/5
P(Wind = Weak | Yes) = 6/9            P(Wind = Weak | No) = 2/5

Now classify the test instance:

Day  Outlook  Temp  Humidity  Wind    PlayTennis
D15  Sunny    Cool  High      Strong  ?

class_NB(x) = argmax_{class in {Yes, No}} P(class) ∏_i P(xi | class)
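A short worked computation for D15 (a sketch using the fractions in the table above; the priors 9/14 and 5/14 come from the 14 training examples):

```python
# Estimated from the 14 training examples: 9 Yes, 5 No.
p_yes, p_no = 9/14, 5/14

# Conditional probabilities relevant to D15 = (Sunny, Cool, High, Strong).
p_d15_given_yes = (2/9) * (3/9) * (3/9) * (3/9)
p_d15_given_no  = (3/5) * (1/5) * (4/5) * (3/5)

score_yes = p_yes * p_d15_given_yes  # ≈ 0.0053
score_no  = p_no * p_d15_given_no    # ≈ 0.0206

print("Yes score:", round(score_yes, 4))
print("No score: ", round(score_no, 4))
print("class_NB(D15) =", "Yes" if score_yes > score_no else "No")  # No
```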

Estimating probabilities / Smoothing

Recap: in the previous example, we had a training set and a new example, (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong), and we asked what classification is given by a naive Bayes classifier.

Let n_c be the number of training instances with class c, and let n_{xi=ak, c} be the number of training instances with attribute value xi = ak and class c. Then:

P(xi = ak | c) = n_{xi=ak, c} / n_c

Problem with this method: when the counts are small, it gives poor estimates; in particular a count of zero yields a zero probability. E.g., P(Outlook = Overcast | No) = 0.

Now suppose we want to classify a new instance: (Outlook=Overcast, Temperature=Cool, Humidity=High, Wind=Strong). Then:

P(No) ∏_i P(xi | No) = 0

This incorrectly gives us zero probability, due to the small sample.

One solution: Laplace smoothing (also called add-one smoothing).

For each class c and each value ak of attribute xi, add one virtual instance. That is, for each class c, recalculate:

P(xi = ak | c) = (n_{xi=ak, c} + 1) / (n_c + K)

where K is the number of possible values of attribute xi.

Laplace smoothing on the PlayTennis training data above: add the following virtual instances for Outlook:

Outlook=Sunny: Yes      Outlook=Sunny: No
Outlook=Overcast: Yes   Outlook=Overcast: No
Outlook=Rain: Yes       Outlook=Rain: No

P(Outlook = Overcast | No): 0/5 → (0 + 1) / (5 + 3) = 1/8
P(Outlook = Overcast | Yes): 4/9 → (4 + 1) / (9 + 3) = 5/12

P(Outlook = Sunny | Yes) = 2/9 → 3/12      P(Outlook = Sunny | No) = 3/5 → 4/8
P(Outlook = Overcast | Yes) = 4/9 → 5/12   P(Outlook = Overcast | No) = 0/5 → 1/8
P(Outlook = Rain | Yes) = 3/9 → 4/12       P(Outlook = Rain | No) = 2/5 → 3/8

Etc.
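A small sketch of the smoothed estimate as a function (the counts below are the Outlook counts from the training table; the function name is my own):

```python
def laplace_smoothed(count_value_and_class: int, count_class: int, num_values: int) -> float:
    """Add-one (Laplace) estimate of P(x_i = a_k | c)."""
    return (count_value_and_class + 1) / (count_class + num_values)

# Outlook has K = 3 possible values: Sunny, Overcast, Rain.
outlook_counts = {            # (count with class Yes, count with class No)
    "Sunny":    (2, 3),
    "Overcast": (4, 0),
    "Rain":     (3, 2),
}
n_yes, n_no, K = 9, 5, 3

for value, (yes_count, no_count) in outlook_counts.items():
    p_yes = laplace_smoothed(yes_count, n_yes, K)
    p_no = laplace_smoothed(no_count, n_no, K)
    print(f"P(Outlook={value} | Yes) = {p_yes:.3f}   P(Outlook={value} | No) = {p_no:.3f}")
# Overcast | No is now 1/8 = 0.125 instead of 0.
```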

In-class exercise

1. Recall the naive Bayes classifier:

class_NB(x) = argmax_{class in {+1,−1}} P(class) ∏_i P(xi | class)

Consider this training set, in which each instance has four binary features and a binary class:

Instance  x1  x2  x3  x4  Class
x1        1   1   1   1   POS
x2        1   1   0   1   POS
x3        0   1   1   1   POS
x4        1   0   0   1   POS
x5        1   0   0   0   NEG
x6        1   0   1   0   NEG
x7        0   1   0   1   NEG

(a) Create a probabilistic model that you could use to classify new instances. That is, calculate P(class) and P(xi | class) for each class. No smoothing is needed (yet).

(b) Use your probabilistic model to determine class_NB for the following new instance:

Instance  x1  x2  x3  x4  Class
x8        0   0   0   1   ?

2. Recall the formula for Laplace smoothing:

P(xi = ak | c) = (n_{xi=ak, c} + 1) / (n_c + K)

where K is the number of possible values of attribute xi.

(a) Apply Laplace smoothing to all the probabilities from the training set in question 1.

(b) Use the smoothed probabilities to determine class_NB for the following new instances:

Instance  x1  x2  x3  x4  Class
x10       0   1   0   0   ?
x11       0   0   0   0   ?

Naive Bayes on continuous-valued attributes

How do we deal with continuous-valued attributes? Two possible solutions:

Discretize the values.

Assume a particular probability distribution of each attribute's values within each class, and estimate its parameters from the training data.

Discretization: Equal-Width Binning

For each attribute xi, create k equal-width bins in the interval from min(xi) to max(xi). The discrete attribute values are now the bins.

Questions: What should k be? What if some bins have very few instances?

There is a trade-off between discretization bias and variance: the more bins, the lower the bias, but the higher the variance, due to the small sample size per bin.

Discretization: Equal-Frequency Binning

For each attribute xi, create k bins so that each bin contains an equal number of values.

This also has problems: What should k be? It hides outliers, and it can group together values that are far apart.

Gaussian Naïve Bayes

Assume that within each class, the values of each numeric feature are normally distributed:

P(xi | c) = N(xi; μ_{i,c}, σ_{i,c}) = (1 / (σ_{i,c} √(2π))) exp(−(xi − μ_{i,c})² / (2 σ_{i,c}²))

where μ_{i,c} is the mean of feature i given class c, and σ_{i,c} is the standard deviation of feature i given class c. We estimate μ_{i,c} and σ_{i,c} from the training data.

Example

x1   x2   Class
3.0  5.1  POS
4.1  6.3  POS
7.2  9.8  POS
2.0  1.1  NEG
4.1  2.0  NEG
8.1  9.4  NEG

P(POS) = 0.5
P(NEG) = 0.5

Estimated class-conditional densities (mean, standard deviation):

N_{1,POS} = N(x; 4.8, 1.8)
N_{2,POS} = N(x; 7.1, 2.0)
N_{1,NEG} = N(x; 4.7, 2.5)
N_{2,NEG} = N(x; 4.2, 3.7)

(Normal-distribution applet: http://homepage.stat.uiowa.edu/~mbognar/applets/normal.html)

Now, suppose you have a new example x, with x1 = 5.2, x2 = 6.3. What is class_NB(x)?

class_NB(x) = argmax_{class in {+1,−1}} P(class) ∏_i P(xi | class)

Note: N is a probability density function rather than a probability, but it can be used analogously to a probability in the naive Bayes calculation.

Positive: P(POS) P(x1 | POS) P(x2 | POS) = (0.5)(0.22)(0.18) ≈ 0.02
Negative: P(NEG) P(x1 | NEG) P(x2 | NEG) = (0.5)(0.16)(0.09) ≈ 0.0072

class_NB(x) = POS

Use logarithms to avoid underflow:

class_NB(x) = argmax_{class in {+1,−1}} P(class) ∏_i P(xi | class)
            = argmax_{class in {+1,−1}} log( P(class) ∏_i P(xi | class) )
            = argmax_{class in {+1,−1}} [ log P(class) + Σ_i log P(xi | class) ]
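A minimal sketch of Gaussian naive Bayes in log space, using the means and standard deviations estimated above (the helper names are mine; the densities 0.22, 0.18, 0.16, 0.09 from the worked example are reproduced up to rounding):

```python
import math

def log_normal_pdf(x: float, mu: float, sigma: float) -> float:
    """Log of the normal density N(x; mu, sigma)."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Per-class parameters (mean, std) for features x1 and x2, estimated earlier.
params = {
    "POS": [(4.8, 1.8), (7.1, 2.0)],
    "NEG": [(4.7, 2.5), (4.2, 3.7)],
}
log_prior = {"POS": math.log(0.5), "NEG": math.log(0.5)}

x = [5.2, 6.3]  # new example
scores = {}
for c, feature_params in params.items():
    scores[c] = log_prior[c] + sum(
        log_normal_pdf(xi, mu, sigma) for xi, (mu, sigma) in zip(x, feature_params)
    )

print(scores)                                          # log-scores; POS is larger
print("class_NB(x) =", max(scores, key=scores.get))    # POS
```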

Bayes Nets

Another example

A patient comes into a doctor's office with a bad cough and a high fever.

Hypothesis space H:
h1: patient has flu
h2: patient does not have flu

Data D: coughing = true, fever = true

Prior probabilities: P(h1) = 0.1, P(h2) = 0.9
Likelihoods: P(D | h1) = 0.8, P(D | h2) = 0.4
Probability of the data: P(D) = ?

Posterior probabilities: P(h1 | D) = ?, P(h2 | D) = ?

Let's say we have the following random variables: cough, fever, flu, smokes.

Full joint probability distribution

smokes = true:
       cough, fever   cough, ¬fever   ¬cough, fever   ¬cough, ¬fever
flu    p1             p2              p3              p4
¬flu   p5             p6              p7              p8

smokes = false:
       cough, fever   cough, ¬fever   ¬cough, fever   ¬cough, ¬fever
flu    p9             p10             p11             p12
¬flu   p13            p14             p15             p16

The sum of all 16 entries is 1.

In principle, the full joint distribution can be used to answer any question about the probabilities of these variables. However, the size of the full joint distribution scales exponentially with the number of variables, so it is expensive to store and to compute with.

Bayesian networks

The idea is to represent the dependencies (or causal relations) among the variables so that space and computation-time requirements are minimized.

Network for this example: smokes and flu are parents of cough, and flu is the parent of fever. (Bayesian networks are one kind of graphical model.)

Conditional probability tables for each node:

P(smoke):  true 0.2, false 0.8
P(flu):    true 0.01, false 0.99

P(cough | flu, smoke):
flu    smoke   cough=true  cough=false
true   true    0.95        0.05
true   false   0.8         0.2
false  true    0.6         0.4
false  false   0.05        0.95

P(fever | flu):
flu    fever=true  fever=false
true   0.9         0.1
false  0.2         0.8

Semantics of Bayesian networks

If the network is correct, we can calculate the full joint probability distribution from the network:

P((X1 = x1) ∧ (X2 = x2) ∧ ... ∧ (Xn = xn)) = ∏_{i=1}^{n} P(Xi = xi | parents(Xi))

where parents(Xi) denotes the specific values of the parents of Xi.

Example: calculate

P[(cough = t) ∧ (fever = f) ∧ (flu = f) ∧ (smoke = f)]
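A sketch of this calculation using the CPTs above (the dictionary layout and variable ordering are my own); by the network semantics the joint factors as P(flu) P(smoke) P(cough | flu, smoke) P(fever | flu).

```python
# CPTs from the network above.
p_smoke = {True: 0.2, False: 0.8}
p_flu = {True: 0.01, False: 0.99}
p_cough = {  # P(cough=True | flu, smoke)
    (True, True): 0.95, (True, False): 0.8,
    (False, True): 0.6, (False, False): 0.05,
}
p_fever = {True: 0.9, False: 0.2}  # P(fever=True | flu)

def joint(flu: bool, smoke: bool, cough: bool, fever: bool) -> float:
    """P(flu, smoke, cough, fever) = P(flu) P(smoke) P(cough|flu,smoke) P(fever|flu)."""
    pc = p_cough[(flu, smoke)] if cough else 1 - p_cough[(flu, smoke)]
    pf = p_fever[flu] if fever else 1 - p_fever[flu]
    return p_flu[flu] * p_smoke[smoke] * pc * pf

# P(cough=t, fever=f, flu=f, smoke=f) = (0.05)(0.8)(0.99)(0.8) ≈ 0.032
print(joint(flu=False, smoke=False, cough=True, fever=False))
```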

Different types of inference in Bayesian networks

Causal inference: the evidence is a cause; we infer the probability of an effect.
Example: instantiate the evidence flu = true. What is P(fever | flu)?

P(fever | flu) = 0.9   (up from the prior P(fever) = 0.207)

Diagnostic inference: the evidence is an effect; we infer the probability of a cause.
Example: instantiate the evidence fever = true. What is P(flu | fever)?

P(flu | fever) = P(fever | flu) P(flu) / P(fever) = (0.9)(0.01) / 0.207 ≈ 0.043   (up from the prior P(flu) = 0.01)

Example: what is P(flu | cough)?

P(flu | cough) = P(cough | flu) P(flu) / P(cough)
             = [P(cough | flu, smoke) P(smoke) + P(cough | flu, ¬smoke) P(¬smoke)] P(flu) / P(cough)
             = [(0.95)(0.2) + (0.8)(0.8)](0.01) / 0.167
             ≈ 0.0497

Inter-causal inference: "explaining away" different possible causes of an effect.
Example: what is P(flu | cough, smoke)?

P(flu | cough, smoke) = P(flu, cough, smoke) / P(cough, smoke)
    = P(cough | flu, smoke) P(flu) P(smoke) / [P(cough | flu, smoke) P(flu) P(smoke) + P(cough | ¬flu, smoke) P(¬flu) P(smoke)]
    = (0.95)(0.01)(0.2) / [(0.95)(0.01)(0.2) + (0.6)(0.99)(0.2)]
    ≈ 0.016

Why is P(flu | cough, smoke) < P(flu | cough)?
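These queries can all be checked by brute-force enumeration over the full joint defined by the network; here is a self-contained sketch (the query helper is my own, not from the slides).

```python
from itertools import product

# CPTs from the network above (same as in the previous sketch).
p_smoke = {True: 0.2, False: 0.8}
p_flu = {True: 0.01, False: 0.99}
p_cough_t = {(True, True): 0.95, (True, False): 0.8,
             (False, True): 0.6, (False, False): 0.05}  # P(cough=True | flu, smoke)
p_fever_t = {True: 0.9, False: 0.2}                      # P(fever=True | flu)

def joint(flu, smoke, cough, fever):
    pc = p_cough_t[(flu, smoke)] if cough else 1 - p_cough_t[(flu, smoke)]
    pf = p_fever_t[flu] if fever else 1 - p_fever_t[flu]
    return p_flu[flu] * p_smoke[smoke] * pc * pf

def query(target, evidence):
    """P(target=True | evidence) by summing the joint over all hidden variables."""
    variables = ["flu", "smoke", "cough", "fever"]
    num = den = 0.0
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if any(world[v] != val for v, val in evidence.items()):
            continue
        p = joint(**world)
        den += p
        if world[target]:
            num += p
    return num / den

print(query("fever", {"flu": True}))                 # 0.9
print(query("flu", {"fever": True}))                 # ≈ 0.043
print(query("flu", {"cough": True}))                 # ≈ 0.0497
print(query("flu", {"cough": True, "smoke": True}))  # ≈ 0.016
```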

Complexity of Bayesian Networks

For n random Boolean variables:

Full joint probability distribution: 2^n entries.

Bayesian network with at most k parents per node: each conditional probability table has at most 2^k entries, so the entire network has at most n·2^k entries.

What are the advantages of Bayesian networks? Intuitive, concise representation of joint probability distribution (i.e., conditional dependencies) of a set of random variables. Represents beliefs and knowledge about a particular class of situations. Efficient (?) (approximate) inference algorithms Efficient, effective learning algorithms

Issues in Bayesian Networks Building / learning network topology Assigning / learning conditional probability tables Approximate inference via sampling

Real-World Example: The Lumière Project at Microsoft Research Bayesian network approach to answering user queries about Microsoft Office. At the time we initiated our project in Bayesian information retrieval, managers in the Office division were finding that users were having difficulty finding assistance efficiently. As an example, users working with the Excel spreadsheet might have required assistance with formatting a graph. Unfortunately, Excel has no knowledge about the common term, graph, and only considered in its keyword indexing the term chart.

Networks were developed by experts from user modeling studies.

Offspring of project was Office Assistant in Office 97.

A naive Bayes model viewed as a Bayesian network: flu is the single parent of smoke, fever, cough, headache, and nausea.

P(flu, smoke, fever, cough, headache, nausea) = P(flu) P(smoke | flu) P(fever | flu) P(cough | flu) P(headache | flu) P(nausea | flu)

Naive Bayes, more generally, for classification:

P(C = cj | X1 = x1, ..., Xn = xn) ∝ P(C = cj) ∏_i P(Xi = xi | C = cj)

Learning network topology Many different approaches, including: Heuristic search, with evaluation based on information theory measures Genetic algorithms Using meta Bayesian networks!

Learning conditional probabilities

In general, random variables are not binary but real-valued, so conditional probability tables become conditional probability distributions. We estimate the parameters of these distributions from data.

Approximate inference via sampling

Recall: we can calculate the full joint probability distribution from the network,

P(X1, ..., Xd) = ∏_{i=1}^{d} P(Xi | parents(Xi))

where parents(Xi) denotes the specific values of the parents of Xi, and we can do diagnostic, causal, and inter-causal inference. But if there are many nodes in the network, exact inference can be very slow! We need efficient algorithms to do approximate calculations.

A Précis of Sampling Algorithms: Gibbs Sampling

Suppose that we want to sample from p(x | θ), x ∈ R^D.

Basic idea: sample sequentially from the full conditionals.

Initialize x_i for i = 1, ..., D.
For t = 1, ..., T:
  sample x_1^(t+1) ~ p(x_1 | x_2^(t), x_3^(t), ..., x_D^(t))
  sample x_2^(t+1) ~ p(x_2 | x_1^(t+1), x_3^(t), ..., x_D^(t))
  ...
  sample x_D^(t+1) ~ p(x_D | x_1^(t+1), ..., x_{D-1}^(t+1))

Complications: (i) how to order the updates (can be random, but be careful); (ii) we need the full conditionals (an approximation can be used).

Under nice assumptions (ergodicity), we eventually get samples x ~ p(x).
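As an illustration (my own example, not from the slides), here is a minimal Gibbs sampler for a standard bivariate Gaussian with correlation rho, where both full conditionals are known in closed form:

```python
import random

def gibbs_bivariate_normal(rho: float, num_samples: int, burn_in: int = 500):
    """Gibbs sampling from a standard bivariate normal with correlation rho.

    Full conditionals: x1 | x2 ~ N(rho * x2, 1 - rho^2), and symmetrically for x2.
    """
    x1, x2 = 0.0, 0.0
    sd = (1 - rho**2) ** 0.5
    samples = []
    for t in range(num_samples + burn_in):
        x1 = random.gauss(rho * x2, sd)  # sample x1 from p(x1 | x2)
        x2 = random.gauss(rho * x1, sd)  # sample x2 from p(x2 | x1)
        if t >= burn_in:
            samples.append((x1, x2))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, num_samples=20_000)
mean_x1 = sum(s[0] for s in samples) / len(samples)
cross_moment = sum(s[0] * s[1] for s in samples) / len(samples)
print("mean of x1 ≈", round(mean_x1, 2))      # ≈ 0
print("E[x1 x2] ≈", round(cross_moment, 2))   # ≈ rho = 0.8
```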

A Précis of Sampling Algorithms: Sampling Algorithms More Generally

How do we perform posterior inference more generally? Oftentimes we rely on strong parametric assumptions (e.g., conjugacy, exponential-family structure). Monte Carlo approximation/inference can get around this.

Basic idea: (1) draw samples x^(s) ~ p(x | θ); (2) compute the quantity of interest from the samples, e.g., marginals such as E_p[x_1] ≈ (1/S) Σ_s x_1^(s). In general:

E[f(x)] = ∫ f(x) p(x) dx ≈ (1/S) Σ_{s=1}^{S} f(x^(s))

A Précis of Sampling Algorithms: CDF (Inverse-Transform) Method (MC technique)

Steps: (1) sample u ~ U(0, 1); (2) set x = F⁻¹(u); then x ~ F.
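A tiny sketch (my own example) for the exponential distribution, whose CDF F(x) = 1 − e^(−λx) inverts to F⁻¹(u) = −ln(1 − u)/λ:

```python
import math
import random

def sample_exponential(lam: float) -> float:
    """Inverse-transform sampling: u ~ U(0,1), then F^{-1}(u) = -ln(1-u)/lam."""
    u = random.random()
    return -math.log(1 - u) / lam

lam = 2.0
draws = [sample_exponential(lam) for _ in range(100_000)]
print("sample mean ≈", round(sum(draws) / len(draws), 3))  # ≈ 1/lam = 0.5
```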

A Précis of Sampling Algorithms: Rejection Sampling (MC technique)

Choose a proposal q(x) and a constant c such that p(x) ≤ c·q(x) for all x. Sample x ~ q(x) and accept it with probability p(x) / (c·q(x)). One can show that the accepted samples satisfy x ~ p(x).

Issues: we need a good q(x) and c, and the rejection rate can grow astronomically!

Pros of MC sampling: samples are independent. Cons: very inefficient in high dimensions. Alternatively, one can use MCMC methods.
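A minimal sketch (my own example, not from the slides): rejection sampling from the Beta(2, 2) density p(x) = 6x(1−x) on [0, 1], using a uniform proposal q(x) = 1 and envelope constant c = 1.5 (the density's maximum):

```python
import random

def beta22_pdf(x: float) -> float:
    """Target density p(x) = 6 x (1 - x) on [0, 1] (Beta(2, 2))."""
    return 6 * x * (1 - x)

def rejection_sample(num_samples: int) -> list:
    """Rejection sampling with uniform proposal q(x) = 1 and c = 1.5 >= max p(x)."""
    c = 1.5
    samples = []
    while len(samples) < num_samples:
        x = random.random()          # x ~ q
        u = random.random()
        if u <= beta22_pdf(x) / c:   # accept with probability p(x) / (c q(x))
            samples.append(x)
    return samples

draws = rejection_sample(50_000)
print("sample mean ≈", round(sum(draws) / len(draws), 3))  # ≈ 0.5 for Beta(2, 2)
```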

Markov Chain Monte Carlo Sampling

One of the most common methods used in real applications.

Recall that, by construction of a Bayesian network, a node is conditionally independent of its non-descendants, given its parents. Also recall that a node can be conditionally dependent on its children and on the other parents of its children. (Why?)

Definition: the Markov blanket of a variable Xi is Xi's parents, children, and children's other parents.

Example (network: smokes and flu are parents of cough; flu is the parent of fever): what is the Markov blanket of cough? Of flu?

Theorem: a node Xi is conditionally independent of all other nodes in the network, given its Markov blanket.

Markov Chain Monte Carlo (MCMC) Sampling

Start with a random sample of the variables: (x1, ..., xn). This is the current state of the algorithm.

Next state: randomly sample a value for one non-evidence variable Xi, conditioned on the current values of the variables in the Markov blanket of Xi.

Example query: what is P(cough | smoke)?

MCMC: start from a random sample, with the evidence variable fixed:

flu    smoke  fever  cough
true   true   false  true

Repeat:

1. Sample flu probabilistically, given the current values of its Markov blanket: smoke = true, fever = false, cough = true. Suppose the result is false. New state:

flu    smoke  fever  cough
false  true   false  true

2. Sample cough, given the current values of its Markov blanket: smoke = true, flu = false. Suppose the result is true. New state:

flu    smoke  fever  cough
false  true   false  true

3. Sample fever, given the current values of its Markov blanket: flu = false. Suppose the result is true. New state:

flu    smoke  fever  cough
false  true   true   true

Each sample contributes to the estimate for the query P(cough | smoke). Suppose we perform 100 such samples, 20 with cough = true and 80 with cough = false. Then the answer to the query is P(cough | smoke) ≈ 0.20.

Theorem: MCMC settles into a behavior in which each state is sampled exactly according to its posterior probability, given the evidence.
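Here is a sketch of this Gibbs-style MCMC on the flu/smoke/cough/fever network, reusing the CPTs listed earlier. Rather than deriving each Markov-blanket conditional by hand, it samples each non-evidence variable from its conditional computed as a ratio of joints, which is equivalent for this tiny network (the helper names and sample counts are my own choices).

```python
import random

# CPTs from the network above.
p_smoke = {True: 0.2, False: 0.8}
p_flu = {True: 0.01, False: 0.99}
p_cough_t = {(True, True): 0.95, (True, False): 0.8,
             (False, True): 0.6, (False, False): 0.05}
p_fever_t = {True: 0.9, False: 0.2}

def joint(state):
    pc = p_cough_t[(state["flu"], state["smoke"])]
    pf = p_fever_t[state["flu"]]
    return (p_flu[state["flu"]] * p_smoke[state["smoke"]] *
            (pc if state["cough"] else 1 - pc) *
            (pf if state["fever"] else 1 - pf))

def mcmc_query(target, evidence, num_samples=100_000, burn_in=1_000):
    """Estimate P(target=True | evidence) by Gibbs-style MCMC over the non-evidence variables."""
    state = {"flu": False, "smoke": False, "cough": False, "fever": False, **evidence}
    hidden = [v for v in state if v not in evidence]
    hits = 0
    for t in range(num_samples + burn_in):
        for var in hidden:
            # P(var=True | everything else) ∝ joint with var=True vs var=False;
            # terms outside var's Markov blanket cancel in the ratio.
            p_true = joint({**state, var: True})
            p_false = joint({**state, var: False})
            state[var] = random.random() < p_true / (p_true + p_false)
        if t >= burn_in and state[target]:
            hits += 1
    return hits / num_samples

print(mcmc_query("cough", {"smoke": True}))  # ≈ exact value 0.6035
```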

Applying Bayesian Reasoning to Speech Recognition

Task: identify the sequence of words uttered by a speaker, given the acoustic signal. Uncertainty is introduced by noise, speaker error, variation in pronunciation, homonyms, etc. Thus speech recognition is viewed as a problem of probabilistic inference.

So far, we've looked at probabilistic reasoning in static environments. Speech is a time sequence of static environments. Let X be the state variables (i.e., the set of non-evidence variables) describing the environment (e.g., the words said during time step t). Let E be the set of evidence variables (e.g., S = features of the acoustic signal).

The evidence values and the joint distribution over X change over time:

t1: X1, e1
t2: X2, e2
etc.

At each t, we want to compute P(Words | S). We know from Bayes rule:

P(Words | S) = P(S | Words) P(Words) / P(S)

P(S | Words), for all words, is a previously learned acoustic model: e.g., for each word, a probability distribution over phones, and for each phone, a probability distribution over acoustic signals (which can vary in pitch, speed, and volume).

P(Words), for all words, is the language model, which specifies the prior probability of each utterance: e.g., a bigram model gives the probability of each word following each other word.

Speech recognition typically makes three assumptions:

1. The process underlying the change is itself stationary, i.e., the state transition probabilities don't change.
2. The current state X depends on only a finite history of previous states (the "Markov assumption"). In a Markov process of order n, the current state depends only on the n previous states.
3. The values e_t of the evidence variables depend only on the current state X_t (the "sensor model").

Hidden Markov Models

Markov model: given state X_t, what is the probability of transitioning to the next state X_{t+1}? E.g., word bigram probabilities give P(word_{t+1} | word_t).

Hidden Markov model: there are observable states (e.g., the signal S) and hidden states (e.g., the words). An HMM represents the probabilities of the hidden states given the observable states.

Example: "I'm firsty, um, can I have something to dwink?"

Graphical Models and Computer Vision