Two Roles for Bayesian Methods


Bayesian Learning

- Bayes Theorem
- MAP, ML hypotheses
- MAP learners
- Minimum description length principle
- Bayes optimal classifier
- Naive Bayes learner
- Example: Learning over text data
- Bayesian belief networks
- Expectation Maximization algorithm

Two Roles for Bayesian Methods

Provides practical learning algorithms:
- Naive Bayes learning
- Bayesian belief network learning
- Combine prior knowledge (prior probabilities) with observed data
- Requires prior probabilities

Provides useful conceptual framework:
- Provides a gold standard for evaluating other learning algorithms
- Additional insight into Occam's razor

Bayes Theorem P (D h)p (h) P (h D) = P (D) P (h) = prior probability of hypothesis h P (D) = prior probability of training data D P (h D) = probability of h given D P (D h) = probability of D given h 3

Choosing Hypotheses P (D h)p (h) P (h D) = P (D) Generally want the most probable hypothesis given the training data Maximum a posteriori hypothesis h MAP : h MAP = arg max P (h D) h H P (D h)p (h) = arg max h H P (D) = arg max P (D h)p (h) h H If assume P (h i )=P (h j ) then can further simplify, and choose the Maximum likelihood (ML) hypothesis h ML = arg max h i H P (D h i) 4

Bayes Theorem

Does the patient have cancer or not?

A patient takes a lab test and the result comes back positive. The test returns a correct positive result in only 98% of the cases in which the disease is actually present, and a correct negative result in only 97% of the cases in which the disease is not present. Furthermore, 0.008 of the entire population have this cancer.

P(cancer) =              P(¬cancer) =
P(+|cancer) =            P(−|cancer) =
P(+|¬cancer) =           P(−|¬cancer) =
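A minimal Python sketch of the arithmetic, using the numbers implied by the problem statement above (98% correct-positive rate, 97% correct-negative rate, 0.008 prevalence); the posterior P(cancer|+) it prints is the quantity the blanks above lead to.

```python
# Bayes rule for the lab-test example; probabilities read off the slide's statement.
p_cancer = 0.008                  # P(cancer)
p_not_cancer = 1 - p_cancer       # P(~cancer)
p_pos_given_cancer = 0.98         # P(+|cancer): correct positive rate
p_pos_given_not = 1 - 0.97        # P(+|~cancer): 1 - correct negative rate

# P(+) by the theorem of total probability
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_not * p_not_cancer

# Posterior P(cancer|+) via Bayes theorem
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(f"P(cancer|+) = {p_cancer_given_pos:.3f}")   # roughly 0.21
```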

Basic Formulas for Probabilities

Product rule: probability P(A ∧ B) of a conjunction of two events A and B:
P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)

Sum rule: probability of a disjunction of two events A and B:
P(A ∨ B) = P(A) + P(B) − P(A ∧ B)

Theorem of total probability: if events A_1, ..., A_n are mutually exclusive with sum_{i=1}^{n} P(A_i) = 1, then
P(B) = sum_{i=1}^{n} P(B|A_i) P(A_i)

Brute Force MAP Hypothesis Learner

1. For each hypothesis h in H, calculate the posterior probability
   P(h|D) = P(D|h) P(h) / P(D)
2. Output the hypothesis h_MAP with the highest posterior probability
   h_MAP = argmax_{h in H} P(h|D)
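A minimal sketch of the brute-force MAP learner, assuming the hypothesis space is small enough to enumerate; the names (`hypotheses`, `likelihood`) and the coin-bias usage example are illustrative, not from the slides. Since P(D) is the same for every h, it is dropped from the argmax.

```python
from math import comb

def brute_force_map(hypotheses, priors, likelihood, D):
    """Return the MAP hypothesis.

    hypotheses: list of hypotheses
    priors:     dict mapping hypothesis -> P(h)
    likelihood: function (D, h) -> P(D|h)
    D:          the training data
    """
    # Unnormalized posteriors P(D|h) P(h); P(D) cancels in the argmax.
    scores = {h: likelihood(D, h) * priors[h] for h in hypotheses}
    return max(scores, key=scores.get)

# Toy example: candidate coin biases, data = 8 heads out of 10 flips.
hypotheses = [0.3, 0.5, 0.7]
priors = {0.3: 1/3, 0.5: 1/3, 0.7: 1/3}
lik = lambda D, h: comb(D[1], D[0]) * h**D[0] * (1 - h)**(D[1] - D[0])
print(brute_force_map(hypotheses, priors, lik, (8, 10)))   # -> 0.7
```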

Relation to Concept Learning

Consider our usual concept learning task:
- instance space X, hypothesis space H, training examples D
- consider the Find-S learning algorithm (outputs the most specific hypothesis from the version space VS_{H,D})

What would Bayes rule produce as the MAP hypothesis?
Does Find-S output a MAP hypothesis?

Relation to Concept Learning

Assume a fixed set of instances x_1, ..., x_m
Assume D is the set of classifications: D = <c(x_1), ..., c(x_m)>
Choose P(D|h):

Relation to Concept Learning

Assume a fixed set of instances x_1, ..., x_m
Assume D is the set of classifications: D = <c(x_1), ..., c(x_m)>

Choose P(D|h):
- P(D|h) = 1 if h is consistent with D
- P(D|h) = 0 otherwise

Choose P(h) to be the uniform distribution:
- P(h) = 1/|H| for all h in H

Then:
P(h|D) = 1/|VS_{H,D}| if h is consistent with D, and 0 otherwise

Evolution of Posterior Probabilities

[Figure: three panels (a), (b), (c) showing the distribution over hypotheses evolving from the prior P(h), to P(h|D1), to P(h|D1, D2) as training data accumulates.]

Characterizing Learning Algorithms by Equivalent MAP Learners

[Figure: an inductive system (the Candidate-Elimination algorithm taking training examples D and hypothesis space H, producing output hypotheses) is shown as equivalent to a Bayesian inference system (a brute-force MAP learner taking D and H, with P(h) uniform and P(D|h) = 1 if consistent, 0 if inconsistent). The prior assumptions are made explicit.]

Learning A Real Valued Function

[Figure: noisy training points around a target function f, with the maximum likelihood hypothesis h_ML fit through them.]

Consider any real-valued target function f.
Training examples <x_i, d_i>, where d_i is a noisy training value:
- d_i = f(x_i) + e_i
- e_i is a random variable (noise) drawn independently for each x_i according to some Gaussian distribution with mean 0

Then the maximum likelihood hypothesis h_ML is the one that minimizes the sum of squared errors:

h_ML = argmin_{h in H} sum_{i=1}^{m} (d_i - h(x_i))^2

Learning A Real Valued Function

h_ML = argmax_{h in H} p(D|h)
     = argmax_{h in H} prod_{i=1}^{m} p(d_i|h)
     = argmax_{h in H} prod_{i=1}^{m} (1 / sqrt(2 pi sigma^2)) exp( -(1/2) ((d_i - h(x_i)) / sigma)^2 )

Maximize the natural log of this instead:

h_ML = argmax_{h in H} sum_{i=1}^{m} [ ln(1 / sqrt(2 pi sigma^2)) - (1/2) ((d_i - h(x_i)) / sigma)^2 ]
     = argmax_{h in H} sum_{i=1}^{m} [ -(1/2) ((d_i - h(x_i)) / sigma)^2 ]
     = argmax_{h in H} sum_{i=1}^{m} [ -(d_i - h(x_i))^2 ]
     = argmin_{h in H} sum_{i=1}^{m} (d_i - h(x_i))^2
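A small numpy sketch of this result, assuming a linear hypothesis class: the ordinary least-squares fit (np.polyfit) is exactly the maximum likelihood hypothesis under the Gaussian-noise model above. The target function and noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy training data d_i = f(x_i) + e_i, with f(x) = 2x + 1 and Gaussian noise.
m = 50
x = rng.uniform(0, 10, size=m)
d = 2.0 * x + 1.0 + rng.normal(0.0, 1.5, size=m)

# h_ML = argmin_h sum_i (d_i - h(x_i))^2; for linear h this is least squares.
slope, intercept = np.polyfit(x, d, deg=1)
sse = np.sum((d - (slope * x + intercept)) ** 2)
print(f"h_ML(x) = {slope:.2f} x + {intercept:.2f}, SSE = {sse:.1f}")
```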

Learning to Predict Probabilities

Consider predicting survival probability from patient data.
Training examples <x_i, d_i>, where d_i is 1 or 0.
We want to train a neural network to output a probability given x_i (not a 0 or 1).

In this case one can show

h_ML = argmax_{h in H} sum_{i=1}^{m} [ d_i ln h(x_i) + (1 - d_i) ln(1 - h(x_i)) ]

Weight update rule for a sigmoid unit:

w_jk <- w_jk + Δw_jk

where Δw_jk = η sum_{i=1}^{m} (d_i - h(x_i)) x_ijk
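A minimal numpy sketch of the update rule above for a single sigmoid unit; the synthetic data, learning rate, and the choice to average the gradient over examples are assumptions for illustration, not part of the slide.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: x_i in R^2 plus a bias input of 1, d_i in {0, 1}.
m = 200
X = np.hstack([rng.normal(size=(m, 2)), np.ones((m, 1))])
true_w = np.array([1.5, -2.0, 0.5])
d = (rng.random(m) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Gradient ascent on the log likelihood
#   sum_i d_i ln h(x_i) + (1 - d_i) ln(1 - h(x_i)),
# whose gradient gives the rule  delta_w = eta * sum_i (d_i - h(x_i)) x_i.
w = np.zeros(3)
eta = 0.05
for _ in range(500):
    h = sigmoid(X @ w)
    w += eta * X.T @ (d - h) / m   # averaged over examples to keep eta stable
print("learned weights:", np.round(w, 2))
```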

Minimum Description Length Principle

Occam's razor: prefer the shortest hypothesis.

MDL: prefer the hypothesis h that minimizes

h_MDL = argmin_{h in H} [ L_C1(h) + L_C2(D|h) ]

where L_C(x) is the description length of x under encoding C.

Example: H = decision trees, D = training data labels
- L_C1(h) is the number of bits needed to describe tree h
- L_C2(D|h) is the number of bits needed to describe D given h
- Note L_C2(D|h) = 0 if the examples are classified perfectly by h; we need only describe the exceptions.

Hence h_MDL trades off tree size for training errors.

Minimum Description Length Principle

h_MAP = argmax_{h in H} P(D|h) P(h)
      = argmax_{h in H} [ log2 P(D|h) + log2 P(h) ]
      = argmin_{h in H} [ -log2 P(D|h) - log2 P(h) ]     (1)

Interesting fact from information theory: the optimal (shortest expected coding length) code for an event with probability p is -log2 p bits.

So interpret (1):
- -log2 P(h) is the length of h under the optimal code
- -log2 P(D|h) is the length of D given h under the optimal code

Prefer the hypothesis that minimizes length(h) + length(misclassifications).
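A tiny sketch of the equivalence in (1), with made-up priors and likelihoods for three hypotheses: the hypothesis with the shortest total code length -log2 P(h) - log2 P(D|h) is exactly the MAP hypothesis.

```python
from math import log2

# Made-up numbers for three hypotheses (illustration only).
prior = {"h1": 0.5, "h2": 0.3, "h3": 0.2}        # P(h)
lik   = {"h1": 0.01, "h2": 0.05, "h3": 0.04}     # P(D|h)

h_map = max(prior, key=lambda h: lik[h] * prior[h])
h_mdl = min(prior, key=lambda h: -log2(prior[h]) - log2(lik[h]))
print(h_map, h_mdl)   # the same hypothesis both ways
```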

Most Probable Classification of New Instances

So far we have sought the most probable hypothesis given the data D (i.e., h_MAP).
Given a new instance x, what is its most probable classification?
- h_MAP(x) is not the most probable classification!

Consider three possible hypotheses:
P(h1|D) = .4, P(h2|D) = .3, P(h3|D) = .3

Given a new instance x:
h1(x) = +, h2(x) = −, h3(x) = −

What is the most probable classification of x?

Bayes Optimal Classifier

Bayes optimal classification:

argmax_{v_j in V} sum_{h_i in H} P(v_j|h_i) P(h_i|D)

Example:
P(h1|D) = .4, P(−|h1) = 0, P(+|h1) = 1
P(h2|D) = .3, P(−|h2) = 1, P(+|h2) = 0
P(h3|D) = .3, P(−|h3) = 1, P(+|h3) = 0

therefore
sum_{h_i in H} P(+|h_i) P(h_i|D) = .4
sum_{h_i in H} P(−|h_i) P(h_i|D) = .6

and
argmax_{v_j in V} sum_{h_i in H} P(v_j|h_i) P(h_i|D) = −
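A short sketch that reproduces the example above: sum the posterior-weighted votes P(v|h_i) P(h_i|D) for each class value and take the argmax.

```python
# Posteriors P(h_i|D) and each hypothesis's prediction distribution P(v|h_i),
# copied from the example above.
posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
p_v_given_h = {
    "h1": {"+": 1.0, "-": 0.0},
    "h2": {"+": 0.0, "-": 1.0},
    "h3": {"+": 0.0, "-": 1.0},
}

votes = {
    v: sum(p_v_given_h[h][v] * posterior[h] for h in posterior)
    for v in ("+", "-")
}
print(votes)                       # {'+': 0.4, '-': 0.6}
print(max(votes, key=votes.get))   # '-'  (even though h_MAP = h1 predicts '+')
```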

Gibbs Classifier

The Bayes optimal classifier provides the best result, but can be expensive if there are many hypotheses.

Gibbs algorithm:
1. Choose one hypothesis at random, according to P(h|D)
2. Use this to classify the new instance

Surprising fact: assume the target concepts are drawn at random from H according to the priors on H. Then:
E[error_Gibbs] <= 2 E[error_BayesOptimal]

Suppose a correct, uniform prior distribution over H. Then:
- Pick any hypothesis from VS, with uniform probability
- Its expected error is no worse than twice that of the Bayes optimal classifier
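A minimal sketch of the Gibbs algorithm, reusing the posteriors and predictions from the Bayes-optimal example above: draw a single hypothesis according to P(h|D) and return its prediction.

```python
import numpy as np

rng = np.random.default_rng(42)

posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}     # P(h|D)
prediction = {"h1": "+", "h2": "-", "h3": "-"}    # h_i(x) for the new instance x

def gibbs_classify():
    # 1. Choose one hypothesis at random, according to P(h|D).
    h = rng.choice(list(posterior), p=list(posterior.values()))
    # 2. Use it to classify the new instance.
    return prediction[h]

print(gibbs_classify())
```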

Naive Bayes Classifier

Along with decision trees, neural networks, and nearest neighbor, one of the most practical learning methods.

When to use:
- A moderate or large training set is available
- The attributes that describe instances are conditionally independent given the classification

Successful applications:
- Diagnosis
- Classifying text documents

Naive Bayes Classifier

Assume target function f : X -> V, where each instance x is described by attributes <a_1, a_2, ..., a_n>.

The most probable value of f(x) is:

v_MAP = argmax_{v_j in V} P(v_j|a_1, a_2, ..., a_n)
      = argmax_{v_j in V} P(a_1, a_2, ..., a_n|v_j) P(v_j) / P(a_1, a_2, ..., a_n)
      = argmax_{v_j in V} P(a_1, a_2, ..., a_n|v_j) P(v_j)

Naive Bayes assumption:

P(a_1, a_2, ..., a_n|v_j) = prod_i P(a_i|v_j)

which gives the Naive Bayes classifier:

v_NB = argmax_{v_j in V} P(v_j) prod_i P(a_i|v_j)

Naive Bayes Algorithm

Naive_Bayes_Learn(examples)
  For each target value v_j
    P̂(v_j) <- estimate P(v_j)
    For each attribute value a_i of each attribute a
      P̂(a_i|v_j) <- estimate P(a_i|v_j)

Classify_New_Instance(x)
  v_NB = argmax_{v_j in V} P̂(v_j) prod_{a_i in x} P̂(a_i|v_j)
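A compact Python sketch of Naive_Bayes_Learn and Classify_New_Instance, assuming instances are dicts mapping attribute names to values; it uses raw frequency estimates (no smoothing yet; see the m-estimate two slides below).

```python
from collections import Counter, defaultdict

def naive_bayes_learn(examples):
    """examples: list of (attributes_dict, target_value) pairs."""
    class_counts = Counter(v for _, v in examples)
    # cond_counts[v][(attr, value)] = # examples with target v and attr = value
    cond_counts = defaultdict(Counter)
    for attrs, v in examples:
        cond_counts[v].update(attrs.items())
    priors = {v: n / len(examples) for v, n in class_counts.items()}
    conds = {
        v: {av: n / class_counts[v] for av, n in cond_counts[v].items()}
        for v in class_counts
    }
    return priors, conds

def classify_new_instance(x, priors, conds):
    """x: attributes_dict. Returns v_NB = argmax_v P(v) * prod_i P(a_i|v)."""
    def score(v):
        p = priors[v]
        for av in x.items():
            p *= conds[v].get(av, 0.0)   # unseen attribute value -> 0
        return p
    return max(priors, key=score)
```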

Naive Bayes: Example

Consider PlayTennis again, and the new instance
<Outlk = sun, Temp = cool, Humid = high, Wind = strong>

We want to compute:

v_NB = argmax_{v_j in V} P(v_j) prod_i P(a_i|v_j)

P(y) P(sun|y) P(cool|y) P(high|y) P(strong|y) = .005
P(n) P(sun|n) P(cool|n) P(high|n) P(strong|n) = .021

so v_NB = n

Naive Bayes: Subtleties

1. The conditional independence assumption is often violated:

P(a_1, a_2, ..., a_n|v_j) = prod_i P(a_i|v_j)

...but it works surprisingly well anyway. Note that we do not need the estimated posteriors P̂(v_j|x) to be correct; we need only that

argmax_{v_j in V} P̂(v_j) prod_i P̂(a_i|v_j) = argmax_{v_j in V} P(v_j) P(a_1, ..., a_n|v_j)

See [Domingos & Pazzani, 1996] for analysis.
Naive Bayes posteriors are often unrealistically close to 1 or 0.

Naive Bayes: Subtleties

2. What if none of the training instances with target value v_j have attribute value a_i? Then

P̂(a_i|v_j) = 0, and so P̂(v_j) prod_i P̂(a_i|v_j) = 0

The typical solution is a Bayesian estimate for P̂(a_i|v_j):

P̂(a_i|v_j) <- (n_c + m p) / (n + m)

where
- n is the number of training examples for which v = v_j
- n_c is the number of examples for which v = v_j and a = a_i
- p is a prior estimate for P̂(a_i|v_j)
- m is the weight given to the prior (i.e., the number of virtual examples)
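A one-function sketch of the estimate above; p and m are the prior estimate and the virtual-example weight described in the list.

```python
def m_estimate(n_c, n, p, m):
    """Smoothed estimate of P(a_i|v_j) = (n_c + m*p) / (n + m).

    n_c: # training examples with v = v_j and a = a_i
    n:   # training examples with v = v_j
    p:   prior estimate of P(a_i|v_j), e.g. 1/k for k possible attribute values
    m:   weight given to the prior, i.e. number of virtual examples
    """
    return (n_c + m * p) / (n + m)

# Even if a value never occurs with class v_j (n_c = 0), the estimate stays > 0:
print(m_estimate(n_c=0, n=14, p=1/3, m=3))   # ~0.059 instead of 0
```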

Learning to Classify Text

Why?
- Learn which news articles are of interest
- Learn to classify web pages by topic

Naive Bayes is among the most effective algorithms.
What attributes shall we use to represent text documents?

Learning to Classify Text

Target concept Interesting? : Document -> {+, −}

1. Represent each document by a vector of words
   - one attribute per word position in the document
2. Learning: use training examples to estimate
   - P(+), P(−), P(doc|+), P(doc|−)

Naive Bayes conditional independence assumption:

P(doc|v_j) = prod_{i=1}^{length(doc)} P(a_i = w_k|v_j)

where P(a_i = w_k|v_j) is the probability that the word in position i is w_k, given v_j.

One more assumption: P(a_i = w_k|v_j) = P(a_m = w_k|v_j) for all i, m

Learn_naive_Bayes_text(Examples, V)

1. Collect all words and other tokens that occur in Examples
   - Vocabulary <- all distinct words and other tokens in Examples
2. Calculate the required P(v_j) and P(w_k|v_j) probability terms
   - For each target value v_j in V do
     - docs_j <- subset of Examples for which the target value is v_j
     - P(v_j) <- |docs_j| / |Examples|
     - Text_j <- a single document created by concatenating all members of docs_j
     - n <- total number of words in Text_j (counting duplicate words multiple times)
     - for each word w_k in Vocabulary
       - n_k <- number of times word w_k occurs in Text_j
       - P(w_k|v_j) <- (n_k + 1) / (n + |Vocabulary|)

Classify_naive_Bayes_text(Doc)

- positions <- all word positions in Doc that contain tokens found in Vocabulary
- Return v_NB, where

v_NB = argmax_{v_j in V} P(v_j) prod_{i in positions} P(a_i|v_j)
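A Python sketch of Learn_naive_Bayes_text and Classify_naive_Bayes_text as given above, with the (n_k + 1)/(n + |Vocabulary|) estimates; working in log space to avoid underflow on long documents is an implementation choice, not part of the slide.

```python
import math
from collections import Counter

def learn_naive_bayes_text(examples, V):
    """examples: list of (list_of_tokens, target_value); V: list of target values."""
    vocabulary = {w for doc, _ in examples for w in doc}
    priors, word_probs = {}, {}
    for v in V:
        docs_v = [doc for doc, label in examples if label == v]
        priors[v] = len(docs_v) / len(examples)
        text_v = [w for doc in docs_v for w in doc]   # concatenation of docs_v
        n = len(text_v)
        counts = Counter(text_v)
        word_probs[v] = {
            w: (counts[w] + 1) / (n + len(vocabulary)) for w in vocabulary
        }
    return vocabulary, priors, word_probs

def classify_naive_bayes_text(doc, vocabulary, priors, word_probs):
    positions = [w for w in doc if w in vocabulary]
    def log_score(v):
        return math.log(priors[v]) + sum(
            math.log(word_probs[v][w]) for w in positions
        )
    return max(priors, key=log_score)
```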

Twenty NewsGroups

Given 1000 training documents from each group, learn to classify new documents according to which newsgroup they came from:

comp.graphics, comp.os.ms-windows.misc, comp.sys.ibm.pc.hardware, comp.sys.mac.hardware, comp.windows.x, alt.atheism, soc.religion.christian, talk.religion.misc, talk.politics.mideast, talk.politics.misc, talk.politics.guns, misc.forsale, rec.autos, rec.motorcycles, rec.sport.baseball, rec.sport.hockey, sci.space, sci.crypt, sci.electronics, sci.med

Naive Bayes: 89% classification accuracy

Article from rec.sport.hockey

Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.ed
From: xxx@yyy.zzz.edu (John Doe)
Subject: Re: This year's biggest and worst (opinion
Date: 5 Apr 93 09:53:39 GMT

I can only comment on the Kings, but the most obvious candidate for pleasant surprise is Alex Zhitnik. He came highly touted as a defensive defenseman, but he's clearly much more than that. Great skater and hard shot (though wish he were more accurate). In fact, he pretty much allowed the Kings to trade away that huge defensive liability Paul Coffey. Kelly Hrudey is only the biggest disappointment if you thought he was any good to begin with. But, at best, he's only a mediocre goaltender. A better choice would be Tomas Sandstrom, though not through any fault of his own, but because some thugs in Toronto decided

Learning Curve for 20 Newsgroups

[Figure: accuracy vs. training-set size (1/3 withheld for test) on the 20 Newsgroups data, comparing Bayes, TFIDF, and PRTFIDF; accuracy axis 0-100, training-set size axis roughly 100-10000.]

Bayesian Belief Networks

Interesting because:
- The Naive Bayes assumption of conditional independence is too restrictive
- But the problem is intractable without some such assumptions...
- Bayesian belief networks describe conditional independence among subsets of variables
- This allows combining prior knowledge about (in)dependencies among variables with observed training data

(Also called Bayes nets.)

Conditional Independence

Definition: X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y given the value of Z; that is, if

(for all x_i, y_j, z_k)  P(X = x_i|Y = y_j, Z = z_k) = P(X = x_i|Z = z_k)

More compactly, we write P(X|Y, Z) = P(X|Z).

Example: Thunder is conditionally independent of Rain, given Lightning:
P(Thunder|Rain, Lightning) = P(Thunder|Lightning)

Naive Bayes uses conditional independence to justify:
P(X, Y|Z) = P(X|Y, Z) P(Y|Z) = P(X|Z) P(Y|Z)

Bayesian Belief Network

[Figure: a network over Storm, BusTourGroup, Lightning, Campfire, Thunder, and ForestFire, with a conditional probability table for Campfire given Storm (S) and BusTourGroup (B):

           S,B    S,¬B   ¬S,B   ¬S,¬B
  C        0.4    0.1    0.8    0.2
  ¬C       0.6    0.9    0.2    0.8 ]

The network represents a set of conditional independence assertions:
- Each node is asserted to be conditionally independent of its nondescendants, given its immediate predecessors.
- Directed acyclic graph.

Bayesian Belief Network

[Figure: the same Storm / BusTourGroup / Lightning / Campfire / Thunder / ForestFire network and Campfire conditional probability table as above.]

The network represents the joint probability distribution over all variables:
- e.g., P(Storm, BusTourGroup, ..., ForestFire)
- in general,
  P(y_1, ..., y_n) = prod_{i=1}^{n} P(y_i|Parents(Y_i))
  where Parents(Y_i) denotes the immediate predecessors of Y_i in the graph
- so the joint distribution is fully defined by the graph, plus the P(y_i|Parents(Y_i))
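A small sketch of the factorization P(y_1, ..., y_n) = prod_i P(y_i|Parents(Y_i)) on the slide's network. Only the Campfire entries are taken from the table in the figure; the remaining conditional probabilities are invented placeholders so the example runs.

```python
# Parents of each variable (the slide's network structure).
parents = {
    "Storm": [], "BusTourGroup": [],
    "Lightning": ["Storm"], "Campfire": ["Storm", "BusTourGroup"],
    "Thunder": ["Lightning"], "ForestFire": ["Storm", "Lightning", "Campfire"],
}

# cpt[var][parent_values] = P(var = True | Parents(var) = parent_values).
# Campfire comes from the figure; everything else is made up for illustration.
cpt = {
    "Storm": {(): 0.2}, "BusTourGroup": {(): 0.5},
    "Lightning": {(True,): 0.7, (False,): 0.05},
    "Campfire": {(True, True): 0.4, (True, False): 0.1,
                 (False, True): 0.8, (False, False): 0.2},
    "Thunder": {(True,): 0.9, (False,): 0.01},
    "ForestFire": {(True, True, True): 0.5, (True, True, False): 0.4,
                   (True, False, True): 0.3, (True, False, False): 0.1,
                   (False, True, True): 0.3, (False, True, False): 0.2,
                   (False, False, True): 0.2, (False, False, False): 0.01},
}

def joint_probability(assignment):
    """P(y_1, ..., y_n) for a full assignment {variable: bool}."""
    prob = 1.0
    for var, value in assignment.items():
        parent_values = tuple(assignment[par] for par in parents[var])
        p_true = cpt[var][parent_values]
        prob *= p_true if value else 1.0 - p_true
    return prob

print(joint_probability({"Storm": True, "BusTourGroup": False, "Lightning": True,
                         "Campfire": False, "Thunder": True, "ForestFire": False}))
```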

Inference in Bayesian Networks

How can one infer the (probabilities of) values of one or more network variables, given observed values of others?
- The Bayes net contains all information needed for this inference
- If only one variable has an unknown value, it is easy to infer it
- In the general case, the problem is NP-hard

In practice, we can succeed in many cases:
- Exact inference methods work well for some network structures
- Monte Carlo methods simulate the network randomly to calculate approximate solutions

Learning of Bayesian Networks

Several variants of this learning task:
- The network structure might be known or unknown
- Training examples might provide values of all network variables, or just some

If the structure is known and we observe all variables:
- Then it is as easy as training a Naive Bayes classifier

Learning Bayes Nets

Suppose the structure is known and the variables are partially observable.
- e.g., observe ForestFire, Storm, BusTourGroup, Thunder, but not Lightning, Campfire...
- Similar to training a neural network with hidden units
- In fact, we can learn the network conditional probability tables using gradient ascent!
- Converge to the network h that (locally) maximizes P(D|h)

Gradient Ascent for Bayes Nets

Let w_ijk denote one entry in the conditional probability table for variable Y_i in the network:

w_ijk = P(Y_i = y_ij | Parents(Y_i) = the list u_ik of values)

e.g., if Y_i = Campfire, then u_ik might be <Storm = T, BusTourGroup = F>.

Perform gradient ascent by repeatedly:
1. Updating all w_ijk using the training data D:
   w_ijk <- w_ijk + η sum_{d in D} P_h(y_ij, u_ik|d) / w_ijk
2. Then renormalizing the w_ijk to assure
   sum_j w_ijk = 1 and 0 <= w_ijk <= 1
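A sketch of one gradient-ascent pass for a single conditional probability table, assuming a hypothetical inference routine `posterior_prob(j, k, d, net)` that returns P_h(y_ij, u_ik|d) for training case d (computing that quantity requires Bayes-net inference, which is not shown here).

```python
import numpy as np

def gradient_ascent_step(w, data, posterior_prob, net, eta=0.01):
    """One update of the CPT for variable Y_i.

    w: array of shape (num_parent_configs, num_values); w[k, j] = w_ijk
       = P(Y_i = y_ij | Parents(Y_i) = u_ik).
    posterior_prob(j, k, d, net): assumed helper returning P_h(y_ij, u_ik | d).
    """
    grad = np.zeros_like(w)
    for d in data:
        for k in range(w.shape[0]):
            for j in range(w.shape[1]):
                # d(ln P(D|h)) / d(w_ijk) = sum_d P_h(y_ij, u_ik | d) / w_ijk
                grad[k, j] += posterior_prob(j, k, d, net) / w[k, j]
    w = w + eta * grad
    # Renormalize so that sum_j w_ijk = 1 and 0 <= w_ijk <= 1.
    w = np.clip(w, 1e-12, None)
    return w / w.sum(axis=1, keepdims=True)
```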

More on Learning Bayes Nets

The EM algorithm can also be used. Repeatedly:
1. Calculate probabilities of unobserved variables, assuming h
2. Calculate new w_ijk to maximize E[ln P(D|h)], where D now includes both the observed variables and the (calculated probabilities of) unobserved variables

When the structure is unknown...
- Algorithms use greedy search to add/subtract edges and nodes
- Active research topic

Summary: Bayesian Belief Networks

- Combine prior knowledge with observed data
- The impact of prior knowledge (when correct!) is to lower the sample complexity
- Active research area:
  - Extend from boolean to real-valued variables
  - Parameterized distributions instead of tables
  - Extend to first-order instead of propositional systems
  - More effective inference methods
  - ...

Expectation Maximization (EM)

When to use:
- Data is only partially observable
- Unsupervised clustering (target value unobservable)
- Supervised learning (some instance attributes unobservable)

Some uses:
- Training Bayesian belief networks
- Unsupervised clustering (AUTOCLASS)
- Learning Hidden Markov Models

Generating Data from Mixture of k Gaussians

[Figure: a one-dimensional density p(x) formed by a mixture of k Gaussians.]

Each instance x is generated by:
1. Choosing one of the k Gaussians with uniform probability
2. Generating an instance at random according to that Gaussian

EM for Estimating k Means

Given:
- Instances from X generated by a mixture of k Gaussian distributions
- Unknown means μ_1, ..., μ_k of the k Gaussians
- We don't know which instance x_i was generated by which Gaussian

Determine:
- Maximum likelihood estimates of μ_1, ..., μ_k

Think of the full description of each instance as y_i = <x_i, z_i1, z_i2>, where
- z_ij is 1 if x_i was generated by the j-th Gaussian
- x_i is observable
- z_ij is unobservable

EM for Estimating k Means

EM Algorithm: pick a random initial h = <μ_1, μ_2>, then iterate:

E step: Calculate the expected value E[z_ij] of each hidden variable z_ij, assuming the current hypothesis h = <μ_1, μ_2> holds:

E[z_ij] = p(x = x_i|μ = μ_j) / sum_{n=1}^{2} p(x = x_i|μ = μ_n)
        = exp(-(x_i - μ_j)^2 / (2σ^2)) / sum_{n=1}^{2} exp(-(x_i - μ_n)^2 / (2σ^2))

M step: Calculate a new maximum likelihood hypothesis h' = <μ'_1, μ'_2>, assuming the value taken on by each hidden variable z_ij is its expected value E[z_ij] calculated above. Replace h = <μ_1, μ_2> by h' = <μ'_1, μ'_2>:

μ'_j <- sum_{i=1}^{m} E[z_ij] x_i / sum_{i=1}^{m} E[z_ij]
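A numpy sketch of the two-Gaussian EM above in one dimension, assuming the variance σ^2 is known and only the means are estimated; the data-generating means and σ below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 1.0

# Data from a mixture of two 1-D Gaussians with unknown means (here 0 and 4).
x = np.concatenate([rng.normal(0.0, sigma, 100), rng.normal(4.0, sigma, 100)])

mu = rng.uniform(x.min(), x.max(), size=2)   # random initial h = <mu_1, mu_2>
for _ in range(50):
    # E step: E[z_ij] proportional to exp(-(x_i - mu_j)^2 / (2 sigma^2))
    logits = -((x[:, None] - mu[None, :]) ** 2) / (2 * sigma**2)
    ez = np.exp(logits)
    ez /= ez.sum(axis=1, keepdims=True)
    # M step: mu_j <- sum_i E[z_ij] x_i / sum_i E[z_ij]
    mu = (ez * x[:, None]).sum(axis=0) / ez.sum(axis=0)

print(np.round(np.sort(mu), 2))   # approximately [0., 4.]
```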

EM Algorithm

Converges to a local maximum likelihood h and provides estimates of the hidden variables z_ij.

In fact, it finds a local maximum of E[ln P(Y|h)]:
- Y is the complete (observable plus unobservable variables) data
- The expected value is taken over the possible values of the unobserved variables in Y

General EM Problem

Given:
- Observed data X = {x_1, ..., x_m}
- Unobserved data Z = {z_1, ..., z_m}
- Parameterized probability distribution P(Y|h), where
  - Y = {y_1, ..., y_m} is the full data, with y_i = x_i ∪ z_i
  - h are the parameters

Determine: h that (locally) maximizes E[ln P(Y|h)]

Many uses:
- Training Bayesian belief networks
- Unsupervised clustering (e.g., k means)
- Hidden Markov Models

General EM Method

Define the likelihood function Q(h'|h), which calculates Y = X ∪ Z using the observed X and the current parameters h to estimate Z:

Q(h'|h) <- E[ln P(Y|h') | h, X]

EM Algorithm:

Estimation (E) step: Calculate Q(h'|h) using the current hypothesis h and the observed data X to estimate the probability distribution over Y:
Q(h'|h) <- E[ln P(Y|h') | h, X]

Maximization (M) step: Replace hypothesis h by the hypothesis h' that maximizes this Q function:
h <- argmax_{h'} Q(h'|h)

Bayesian Learning: Summary

- Bayes Theorem
- MAP, ML hypotheses
- MAP learners
- Minimum description length principle
- Bayes optimal classifier
- Naive Bayes learner
- Example: Learning over text data
- Bayesian belief networks
- Expectation Maximization algorithm