Bayesian Learning. Two Roles for Bayesian Methods. Bayes Theorem. Choosing Hypotheses


Bayesian Learning

Two Roles for Bayesian Methods

Probabilistic approach to inference: quantities of interest are governed by probability distributions, and optimal decisions can be made by reasoning about these probabilities.
- Learning algorithms that deal directly with probabilities.
- An analysis framework for non-probabilistic methods.

Provides practical learning algorithms:
- Naive Bayes learning
- Bayesian belief network learning
- Combine prior knowledge (prior probabilities) with observed data
- Requires prior probabilities

Provides a useful conceptual framework:
- A "gold standard" for evaluating other learning algorithms
- Additional insight into Occam's razor

Bayes Theorem

P(h|D) = P(D|h) P(h) / P(D)

- P(h): prior probability that h holds, before seeing the training data
- P(D): prior probability of observing training data D
- P(D|h): probability of observing D in a world where h holds
- P(h|D): probability of h holding given observed data D

Choosing Hypotheses

Generally we want the most probable hypothesis given the training data, the maximum a posteriori (MAP) hypothesis h_MAP:

h_MAP = argmax_{h in H} P(h|D)
      = argmax_{h in H} P(D|h) P(h) / P(D)
      = argmax_{h in H} P(D|h) P(h)
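As a concrete illustration of the h_MAP computation above, here is a minimal Python sketch; the hypothesis space, priors, and likelihoods are invented toy values, not from the slides.

```python
# Toy illustration of h_MAP = argmax_h P(D|h) P(h).
# The hypothesis space and all numbers below are invented for illustration.
priors = {"h1": 0.7, "h2": 0.2, "h3": 0.1}          # P(h)
likelihoods = {"h1": 0.05, "h2": 0.30, "h3": 0.90}  # P(D|h) for the observed D

# Unnormalized posteriors P(D|h) P(h); dividing by P(D) does not change the argmax.
unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
evidence = sum(unnormalized.values())               # P(D) = sum_h P(D|h) P(h)
posteriors = {h: p / evidence for h, p in unnormalized.items()}

h_map = max(posteriors, key=posteriors.get)
print(posteriors, h_map)
```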

Choosing Hypotheses (cont.)

If all hypotheses are equally probable a priori, P(h_i) = P(h_j) for all h_i, h_j in H, then h_MAP reduces to the maximum likelihood (ML) hypothesis:

h_ML = argmax_{h in H} P(D|h)

Bayes Theorem: Example

Does the patient have cancer or not? A patient takes a lab test and the result comes back positive. The test returns a correct positive result in only 98% of the cases in which the disease is actually present, and a correct negative result in only 97% of the cases in which the disease is not present. Furthermore, .008 of the entire population have this cancer.

P(cancer) = .008        P(¬cancer) = .992
P(+|cancer) = .98       P(-|cancer) = .02
P(+|¬cancer) = .03      P(-|¬cancer) = .97

How does P(cancer|+) compare to P(¬cancer|+)? What is h_MAP?

Basic Probability Formulas

- Product rule: probability of a conjunction of two events A and B:
  P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)
- Sum rule: probability of a disjunction of two events A and B:
  P(A ∨ B) = P(A) + P(B) - P(A ∧ B)
- Theorem of total probability: if events A_1, ..., A_n are mutually exclusive with sum_{i=1}^n P(A_i) = 1, then
  P(B) = sum_{i=1}^n P(B|A_i) P(A_i)

Brute Force MAP Hypothesis Learner

1. For each hypothesis h in H, calculate the posterior probability
   P(h|D) = P(D|h) P(h) / P(D)
2. Output the hypothesis h_MAP with the highest posterior probability
   h_MAP = argmax_{h in H} P(h|D)
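A quick numeric check of the cancer-test example above, using only the numbers given on the slide:

```python
# Worked check of the cancer-test example using the numbers given above.
p_cancer = 0.008
p_not_cancer = 1.0 - p_cancer
p_pos_given_cancer = 0.98        # sensitivity
p_pos_given_not_cancer = 0.03    # 1 - specificity (specificity = 0.97)

# Unnormalized posteriors P(+|h) P(h) for the two hypotheses.
score_cancer = p_pos_given_cancer * p_cancer               # 0.00784
score_not_cancer = p_pos_given_not_cancer * p_not_cancer   # 0.02976

p_cancer_given_pos = score_cancer / (score_cancer + score_not_cancer)
print(f"P(cancer|+) ~= {p_cancer_given_pos:.3f}")          # ~0.21
print("h_MAP =", "cancer" if score_cancer > score_not_cancer else "not cancer")
```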

Relation to Concept Learning

Consider our usual concept learning task: instance space X, hypothesis space H, training examples D. Consider the Find-S learning algorithm (it outputs the most specific hypothesis from the version space VS_{H,D}).
- What would Bayes rule produce as the MAP hypothesis?
- Does Find-S output a MAP hypothesis?

Concept Learning: Assumptions

1. The training data D is noise free.
2. The target concept c is contained in the hypothesis space H.
3. There is no a priori reason to believe any hypothesis h_i is more probable than any other:
   P(h) = 1/|H| for all h in H.

Concept Learning: P(D|h)

P(D|h) is the probability of observing the target values D = <d_1, d_2, ..., d_n> for the fixed set of instances <x_1, x_2, ..., x_n>, given a world in which h holds, i.e., h is the correct description of the target concept c (h(x) = c(x)). So there are only two possibilities:

P(D|h) = 1 if h is consistent with D
P(D|h) = 0 otherwise

Concept Learning: P(D)

Use the theorem of total probability:

P(D) = sum_{h_i in H} P(D|h_i) P(h_i)
     = sum_{h_i in VS_{H,D}} 1 · (1/|H|) + sum_{h_i not in VS_{H,D}} 0 · (1/|H|)
     = |VS_{H,D}| / |H|     (1)
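A small numeric check of the result that consistent hypotheses each receive posterior 1/|VS_{H,D}| under the three assumptions above. The threshold hypothesis space and the data set are made-up toy values.

```python
# Toy check of P(h|D) = 1/|VS_{H,D}| for consistent h under the three assumptions above.
# Hypotheses are threshold classifiers h_t(x) = (x >= t); the data set is invented.
thresholds = [0.0, 0.25, 0.5, 0.75, 1.0]          # hypothesis space H (toy)
data = [(0.9, True), (0.6, True), (0.2, False)]   # (x, c(x)) pairs, noise-free by assumption

def consistent(t):
    return all((x >= t) == label for x, label in data)

prior = 1.0 / len(thresholds)                     # uniform P(h) = 1/|H|
likelihood = {t: (1.0 if consistent(t) else 0.0) for t in thresholds}  # P(D|h)
evidence = sum(likelihood[t] * prior for t in thresholds)              # = |VS|/|H|
posterior = {t: likelihood[t] * prior / evidence for t in thresholds}

print(posterior)  # each consistent threshold gets 1/|VS|, the rest get 0
```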

Concept Learning: Applying Bayes Rule

In case h is inconsistent with D:

P(h|D) = P(D|h) P(h) / P(D) = 0 · P(h) / P(D) = 0

In case h is consistent with D:

P(h|D) = P(D|h) P(h) / P(D) = (1 · (1/|H|)) / (|VS_{H,D}| / |H|) = 1 / |VS_{H,D}|

Relation to Concept Learning: Summary

- Assume a fixed set of instances <x_1, ..., x_m>.
- Assume D is the set of classifications D = <c(x_1), ..., c(x_m)>.
- Choose P(D|h) = 1 if h is consistent with D, and P(D|h) = 0 otherwise.
- Choose P(h) to be the uniform distribution, P(h) = 1/|H| for all h in H.
Then,
P(h|D) = 1/|VS_{H,D}| if h is consistent with D, and 0 otherwise.
Every consistent hypothesis is a MAP hypothesis!

Evolution of P(h|D, ...)

(Figure: three panels over the hypothesis space, (a) the uniform prior P(h), (b) the posterior P(h|D1), (c) the posterior P(h|D1, D2).)

As more data are observed, the posterior probability of the consistent hypotheses increases, since |H| >= |VS_{H,D1}| > |VS_{H,D1,D2}|. In (b), hypotheses inconsistent with the observed data get excluded, and so on in (c).

Find-S: Consistent Learner

- Every consistent learner generates a MAP hypothesis (under the uniform prior and noise-free data assumptions above).
- Since Find-S is a consistent learner (when the data set is noise free), it produces a MAP hypothesis.
- Even though Find-S does not deal with probability at all, a Bayesian analysis provides a way to characterize the behavior of the algorithm.
- Also, by identifying P(h) and P(D|h), we can characterize the implicit assumptions under which the algorithm behaves optimally.

Learning a Real-Valued Function

(Figure: training points <x_i, d_i> scattered around the target function f, with the ML hypothesis h_ML fit through them; e denotes the noise.)

Consider any real-valued target function f. Training examples are <x_i, d_i>, where d_i is a noisy training value:

d_i = f(x_i) + e_i

and e_i is a random variable (noise) drawn independently for each x_i from a Gaussian distribution with mean 0. Then the maximum likelihood hypothesis h_ML is the one that minimizes the sum of squared errors:

h_ML = argmin_{h in H} sum_{i=1}^m (d_i - h(x_i))^2

Setting up the Stage: ML hypothesis

Probability density function:
p(x_0) = lim_{ε→0} (1/ε) P(x_0 <= x < x_0 + ε)

h_ML = argmax_{h in H} p(D|h)

With training instances x_1, ..., x_m and target values d_1, ..., d_m, where d_i = f(x_i) + e_i, and assuming the training examples are mutually independent given h:

h_ML = argmax_{h in H} prod_{i=1}^m p(d_i|h)

Note: p(a, b|c) = p(a|b, c) p(b|c) = p(a|c) p(b|c) when a and b are independent given c.

Derivation of ML for Function Approximation

From h_ML = argmax_h prod_{i=1}^m p(d_i|h): since d_i = f(x_i) + e_i and e_i ~ N(0, σ^2), it must be that d_i ~ N(f(x_i), σ^2). (x ~ N(µ, σ^2) means the random variable x is normally distributed with mean µ and variance σ^2.) Using the pdf of N:

h_ML = argmax_h prod_{i=1}^m (1/sqrt(2πσ^2)) exp(-(d_i - µ)^2 / (2σ^2))
     = argmax_h prod_{i=1}^m (1/sqrt(2πσ^2)) exp(-(d_i - h(x_i))^2 / (2σ^2))

Derivation of ML (cont.)

Get rid of the constant factor 1/sqrt(2πσ^2) and take the log:

h_ML = argmax_h sum_{i=1}^m ln exp(-(d_i - h(x_i))^2 / (2σ^2))
     = argmax_h sum_{i=1}^m -(d_i - h(x_i))^2 / (2σ^2)
     = argmin_h sum_{i=1}^m (d_i - h(x_i))^2     (2)
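A numeric sanity check of the derivation above: over a small set of candidate hypotheses, the one that maximizes the Gaussian log-likelihood is the same one that minimizes the sum of squared errors. The data, candidate slopes, and sigma are toy assumptions.

```python
# Numeric check that argmax of the Gaussian likelihood equals argmin of the
# sum of squared errors. Data, candidate slopes, and sigma are toy assumptions.
import math, random

random.seed(0)
sigma = 0.5
xs = [i / 10 for i in range(20)]
ds = [2.0 * x + random.gauss(0.0, sigma) for x in xs]   # d_i = f(x_i) + e_i with f(x) = 2x

candidates = [1.0, 1.5, 2.0, 2.5, 3.0]                  # hypotheses h_w(x) = w * x

def sse(w):
    return sum((d - w * x) ** 2 for x, d in zip(xs, ds))

def log_likelihood(w):
    return sum(-0.5 * math.log(2 * math.pi * sigma**2) - (d - w * x) ** 2 / (2 * sigma**2)
               for x, d in zip(xs, ds))

print(min(candidates, key=sse), max(candidates, key=log_likelihood))  # same hypothesis
```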

Least Squares as ML: Assumptions

- Observed training values d_i are generated by adding random noise to the true target value, where the noise has a normal distribution with zero mean.
- All hypotheses are equally probable (uniform prior).

Limitations
- Note: it is possible that h_MAP ≠ h_ML!
- Possible noise in the x_i is not accounted for.

Learning to Predict Probabilities

Consider predicting survival probability from patient data. Training examples are <x_i, d_i>, where d_i is 1 or 0. We want to train a network to output a probability given x_i (not just 0 or 1). In this case we can show:

h_ML = argmax_{h in H} sum_{i=1}^m d_i ln h(x_i) + (1 - d_i) ln(1 - h(x_i))

Weight update rule for a sigmoid unit:

w_jk ← w_jk + Δw_jk, where Δw_jk = η sum_{i=1}^m (d_i - h(x_i)) x_ijk

Learning to Predict Probabilities: P(D|h)

First start with P(D|h), given D = {<x_1, d_1>, ..., <x_m, d_m>}:

P(D|h) = prod_{i=1}^m P(x_i, d_i|h)

Assuming P(x_i|h) = P(x_i):

P(D|h) = prod_{i=1}^m P(x_i, d_i|h) = prod_{i=1}^m P(d_i|h, x_i) P(x_i|h) = prod_{i=1}^m P(d_i|h, x_i) P(x_i)     (3)

Note: P(A, B|C) = P(A|B, C) P(B|C)

Learning to Predict Probabilities: P(D|h), continued

h(x_i) is the probability of d_i = 1 given the sample x_i, thus:

P(d_i|h, x_i) = h(x_i)      if d_i = 1
P(d_i|h, x_i) = 1 - h(x_i)  if d_i = 0

Rewriting the above:

P(d_i|h, x_i) = h(x_i)^{d_i} (1 - h(x_i))^{1 - d_i}

Thus:

P(D|h) = prod_{i=1}^m P(d_i|h, x_i) P(x_i) = prod_{i=1}^m h(x_i)^{d_i} (1 - h(x_i))^{1 - d_i} P(x_i)
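A minimal sketch of the quantity being maximized above: the log of P(D|h), dropping the h-independent P(x_i) factors, is exactly the sum d_i ln h(x_i) + (1 - d_i) ln(1 - h(x_i)). The data and the candidate hypothesis h are invented toy values.

```python
# Sketch: the log of P(D|h) (dropping the h-independent P(x_i) terms) is the
# cross-entropy-style sum used for h_ML above. Data and the candidate h are toy values.
import math

data = [(0.2, 0), (0.5, 1), (0.9, 1), (0.3, 0)]   # (x_i, d_i) pairs, invented

def h(x):                 # a hypothetical hypothesis mapping x to a probability
    return min(max(x, 1e-9), 1 - 1e-9)

log_likelihood = sum(d * math.log(h(x)) + (1 - d) * math.log(1 - h(x)) for x, d in data)
print(log_likelihood)
```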

Learning to Predict Probabilities: h_ML

h_ML = argmax_{h in H} prod_{i=1}^m h(x_i)^{d_i} (1 - h(x_i))^{1 - d_i} P(x_i)
     = argmax_{h in H} prod_{i=1}^m h(x_i)^{d_i} (1 - h(x_i))^{1 - d_i}     (4)

since P(x_i) is independent of h. Finally, taking ln:

h_ML = argmax_{h in H} sum_{i=1}^m d_i ln h(x_i) + (1 - d_i) ln(1 - h(x_i))

Note the similarity of the above to entropy (turn it into argmin and compare to -sum_i p_i log_2 p_i).

Learning to Predict Probabilities: Gradient Descent

Letting G(h, D) = sum_{i=1}^m d_i ln h(x_i) + (1 - d_i) ln(1 - h(x_i)), and using a neural network with a sigmoid output unit h(x_i):

∂G(h, D)/∂w_jk = sum_{i=1}^m [∂G(h, D)/∂h(x_i)] [∂h(x_i)/∂w_jk]
  = sum_{i=1}^m [∂(sum_p d_p ln h(x_p) + (1 - d_p) ln(1 - h(x_p)))/∂h(x_i)] [∂h(x_i)/∂w_jk]
  = sum_{i=1}^m [∂(d_i ln h(x_i) + (1 - d_i) ln(1 - h(x_i)))/∂h(x_i)] [∂h(x_i)/∂w_jk]
  = sum_{i=1}^m [(d_i - h(x_i)) / (h(x_i)(1 - h(x_i)))] [∂h(x_i)/∂w_jk]
  = sum_{i=1}^m [(d_i - h(x_i)) / (h(x_i)(1 - h(x_i)))] σ'(x_i) x_ijk
  = sum_{i=1}^m (d_i - h(x_i)) x_ijk

Note: d ln(x)/dx = 1/x, and σ'(x_i) = h(x_i)(1 - h(x_i)).

Learning Probabilities: Weight Update

We want to maximize (not minimize), thus:

Δw_jk = η ∂G(h, D)/∂w_jk = η sum_{i=1}^m (d_i - h(x_i)) x_ijk
w_jk ← w_jk + Δw_jk

Following the above rule converges to a (locally) maximum likelihood hypothesis h_ML. Compare to backpropagation!

Minimum Description Length

Occam's razor: prefer the shortest hypothesis.

h_MAP = argmax_{h in H} P(D|h) P(h)
      = argmax_{h in H} log_2 P(D|h) + log_2 P(h)
      = argmin_{h in H} -log_2 P(D|h) - log_2 P(h)

Surprisingly, the above can be interpreted as h_MAP preferring shorter hypotheses, assuming a particular encoding scheme is used for the hypothesis and the data. According to information theory, the shortest code length for a message occurring with probability p_i is -log_2 p_i bits.
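A minimal sketch of the weight update rule above for a single sigmoid unit. The data set (with an input seen under both labels, so the target is a genuine probability) and the learning rate are made-up assumptions.

```python
# Sketch of the update rule above for one sigmoid unit:
# w_k <- w_k + eta * sum_i (d_i - h(x_i)) * x_ik. Data and eta are toy assumptions.
import math

data = [([1.0, 0.5], 1), ([1.0, 0.5], 1), ([1.0, 0.5], 0),
        ([1.0, 0.2], 0), ([1.0, 0.8], 1)]          # ([bias, x], d) pairs, invented
w = [0.0, 0.0]
eta = 0.5

def h(x):  # sigmoid unit
    return 1.0 / (1.0 + math.exp(-sum(wk * xk for wk, xk in zip(w, x))))

for _ in range(1000):  # gradient ascent on G(h, D)
    grad = [sum((d - h(x)) * x[k] for x, d in data) for k in range(len(w))]
    w = [wk + eta * gk for wk, gk in zip(w, grad)]

print(w, [round(h(x), 2) for x, _ in data])  # learned weights and fitted probabilities
```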

Minimum Description Length (cont.)

h_MAP = argmin_{h in H} -log_2 P(D|h) - log_2 P(h)

- L_C(i): description length of message i with respect to code C.
- -log_2 P(h): description length of h under the optimal coding C_H for the hypothesis space H, so L_{C_H}(h) = -log_2 P(h).
- -log_2 P(D|h): description length of the training data D given hypothesis h, under the optimal encoding C_{D|H}, so L_{C_{D|H}}(D|h) = -log_2 P(D|h).

Finally, we get:

MAP: h_MAP = argmin_{h in H} L_{C_{D|H}}(D|h) + L_{C_H}(h)

MDL: choose h_MDL such that

h_MDL = argmin_{h in H} L_{C_1}(h) + L_{C_2}(D|h),

which is the hypothesis that minimizes the combined length of the hypothesis itself and of the data described by the hypothesis. h_MDL = h_MAP if C_1 = C_H and C_2 = C_{D|H}.

Bayes Optimal Classifier

What is the most probable hypothesis given the training data, vs. what is the most probable classification?

Example: P(h_1|D) = 0.4, P(h_2|D) = 0.3, P(h_3|D) = 0.3. Given a new instance x, h_1(x) = 1, h_2(x) = 0, h_3(x) = 0. In this case, the probability of x being positive is only 0.4.

Bayes Optimal Classification

If a new instance can take classification v_j in V, then the probability P(v_j|D) that the correct classification of the new instance is v_j is:

P(v_j|D) = sum_{h_i in H} P(v_j|h_i) P(h_i|D)

Thus, the optimal classification is

argmax_{v_j in V} sum_{h_i in H} P(v_j|h_i) P(h_i|D)
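A worked check of the Bayes optimal classification formula on the example above (posteriors 0.4, 0.3, 0.3; only h_1 predicts 1):

```python
# Worked check of the Bayes optimal classification example above.
posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}          # P(h_i|D)
predictions = {"h1": 1, "h2": 0, "h3": 0}               # h_i(x) for the new instance x

def p_vj_given_hi(v, h):                                # P(v_j|h_i): 1 if h_i predicts v_j
    return 1.0 if predictions[h] == v else 0.0

p_v = {v: sum(p_vj_given_hi(v, h) * posteriors[h] for h in posteriors) for v in (0, 1)}
print(p_v, "optimal classification:", max(p_v, key=p_v.get))   # {0: 0.6, 1: 0.4} -> 0
```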

Bayes Optimal Classifier: Assumption

What is the assumption for the following to work?

P(v_j|D) = sum_{h_i in H} P(v_j|h_i) P(h_i|D)

Let's consider H = {h, ¬h}:

P(v|D) = P(v, h|D) + P(v, ¬h|D)
       = P(v|h, D) P(h|D) + P(v|¬h, D) P(¬h|D)
       = P(v|h) P(h|D) + P(v|¬h) P(¬h|D)     {if P(v|h, D) = P(v|h), etc.}

Bayes Optimal Classifier: Example

P(h_1|D) = 0.4, P(h_2|D) = 0.3, P(h_3|D) = 0.3. Given a new instance x, h_1(x) = 1, h_2(x) = 0, h_3(x) = 0. So P(1|h_1) = 1, P(0|h_1) = 0, etc.

P(1|D) = 0.4 + 0 + 0 = 0.4,  P(0|D) = 0 + 0.3 + 0.3 = 0.6

Thus, argmax_{v in {1, 0}} P(v|D) = 0.

Bayes optimal classifiers maximize the probability that a new instance is correctly classified, given the available data, the hypothesis space H, and the prior probabilities over H. Some oddities: the resulting hypothesis can be outside of the hypothesis space.

Gibbs Sampling

Finding argmax_{v in V} P(v|D) by considering every hypothesis h in H can be infeasible. A less optimal, but error-bounded, alternative is the Gibbs algorithm:
1. Randomly pick h in H with probability P(h|D).
2. Use h to classify the new instance x.
The result is that the expected misclassification rate is at most twice that of the Bayes optimal classifier (BOC). Example: in concept learning, if the prior over H is uniform, then randomly picking any h from the version space will result in an expected error of at most twice that of the BOC.

Naive Bayes Classifier

Given attribute values a_1, a_2, ..., a_n, produce the classification v in V:

v_MAP = argmax_{v_j in V} P(v_j|a_1, a_2, ..., a_n)
      = argmax_{v_j in V} P(a_1, a_2, ..., a_n|v_j) P(v_j) / P(a_1, a_2, ..., a_n)
      = argmax_{v_j in V} P(a_1, a_2, ..., a_n|v_j) P(v_j)

We want to estimate P(a_1, a_2, ..., a_n|v_j) and P(v_j) from the training data.
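A minimal sketch of the Gibbs procedure above, reusing the 0.4/0.3/0.3 example: draw a single hypothesis according to P(h|D) and classify with it. Repeating many times shows how often it disagrees with the Bayes optimal answer.

```python
# Sketch of the Gibbs classifier on the example above: draw h ~ P(h|D), then classify
# with that single h. Repeating many times estimates its expected behaviour.
import random

random.seed(0)
posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}   # P(h_i|D), from the example above
predictions = {"h1": 1, "h2": 0, "h3": 0}        # h_i(x) for the new instance x

def gibbs_classify():
    r, acc = random.random(), 0.0
    for h, p in posteriors.items():              # pick h with probability P(h|D)
        acc += p
        if r <= acc:
            return predictions[h]
    return predictions[h]

votes = [gibbs_classify() for _ in range(10000)]
print("fraction classified 1:", sum(votes) / len(votes))   # ~0.4, vs. BOC which always says 0
```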

Naive Bayes

- P(v_j) is easy to estimate: just count the frequency of each target value.
- Estimating P(a_1, a_2, ..., a_n|v_j) directly requires on the order of (number of possible instances) x (number of possible target values) parameters.
- P(a_1, a_2, ..., a_n|v_j) can be approximated as
  P(a_1, a_2, ..., a_n|v_j) = prod_i P(a_i|v_j).
- Naive Bayes only needs on the order of (number of distinct attribute values) x (number of distinct target values) estimates.

Naive Bayes Algorithm

Naive_Bayes_Learn(examples)
  For each target value v_j
    P̂(v_j) ← estimate P(v_j)
    For each attribute value a_i of each attribute a
      P̂(a_i|v_j) ← estimate P(a_i|v_j)

Classify_New_Instance(x)
  v_NB = argmax_{v_j in V} P̂(v_j) prod_{a_i in x} P̂(a_i|v_j)

From this, the naive Bayes classifier is defined as:

v_NB = argmax_{v_j in V} P(v_j) prod_i P(a_i|v_j)

Naive Bayes: Example

Consider PlayTennis again, and a new instance:

x = <Outlook = sunny, Temperature = cool, Humidity = high, Wind = strong>

We want to compute, with V = {Yes, No}:

v_NB = argmax_{v_j in V} P(v_j) prod_i P(x_i|v_j)

P(Y) P(sun|Y) P(cool|Y) P(high|Y) P(strong|Y) = .005
P(N) P(sun|N) P(cool|N) P(high|N) P(strong|N) = .02

Thus, v_NB = No.

Naive Bayes: Subtleties

1. The conditional independence assumption P(a_1, a_2, ..., a_n|v_j) = prod_i P(a_i|v_j) is often violated
   ...but it works surprisingly well anyway. Note that we don't need the estimated posteriors P̂(v_j|x) to be correct; we need only that
   argmax_{v_j in V} P̂(v_j) prod_i P̂(a_i|v_j) = argmax_{v_j in V} P(v_j) P(a_1, ..., a_n|v_j).
   Naive Bayes posteriors are often unrealistically close to 1 or 0.
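A runnable check of the PlayTennis example above. The slide only names the dataset; the 14-row table below is assumed to be Mitchell's standard PlayTennis data, reproduced here so the .005 vs .02 comparison can be verified.

```python
# Naive Bayes on the standard PlayTennis data (assumed; the slide only names the
# dataset), checking the ~.005 vs ~.02 comparison quoted above.
from collections import Counter, defaultdict

# (Outlook, Temperature, Humidity, Wind) -> PlayTennis
data = [
    (("Sunny", "Hot", "High", "Weak"), "No"),     (("Sunny", "Hot", "High", "Strong"), "No"),
    (("Overcast", "Hot", "High", "Weak"), "Yes"), (("Rain", "Mild", "High", "Weak"), "Yes"),
    (("Rain", "Cool", "Normal", "Weak"), "Yes"),  (("Rain", "Cool", "Normal", "Strong"), "No"),
    (("Overcast", "Cool", "Normal", "Strong"), "Yes"), (("Sunny", "Mild", "High", "Weak"), "No"),
    (("Sunny", "Cool", "Normal", "Weak"), "Yes"), (("Rain", "Mild", "Normal", "Weak"), "Yes"),
    (("Sunny", "Mild", "Normal", "Strong"), "Yes"), (("Overcast", "Mild", "High", "Strong"), "Yes"),
    (("Overcast", "Hot", "Normal", "Weak"), "Yes"), (("Rain", "Mild", "High", "Strong"), "No"),
]

class_counts = Counter(v for _, v in data)
attr_counts = defaultdict(Counter)            # attr_counts[class][(attribute index, value)]
for attrs, v in data:
    for i, a in enumerate(attrs):
        attr_counts[v][(i, a)] += 1

def score(x, v):                              # P(v) * prod_i P(a_i|v), estimated by frequencies
    p = class_counts[v] / len(data)
    for i, a in enumerate(x):
        p *= attr_counts[v][(i, a)] / class_counts[v]
    return p

x = ("Sunny", "Cool", "High", "Strong")
scores = {v: score(x, v) for v in class_counts}
print(scores)                                 # {'No': ~0.0206, 'Yes': ~0.0053}
print("v_NB =", max(scores, key=scores.get))
```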

Naive Bayes: Subtleties (cont.)

2. What if none of the training instances with target value v_j have attribute value a_i? Then P̂(a_i|v_j) = 0, and P̂(v_j) prod_i P̂(a_i|v_j) = 0.

The typical solution is a Bayesian (m-)estimate for P̂(a_i|v_j):

P̂(a_i|v_j) = (n_c + m p) / (n + m)

where
- n is the number of training examples for which v = v_j,
- n_c is the number of examples for which v = v_j and a = a_i,
- p is a prior estimate for P̂(a_i|v_j),
- m is the weight given to the prior (i.e., the number of "virtual" examples).

Conditional Independence

Definition: X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y given the value of Z; that is, if

(∀ x_i, y_j, z_k) P(X = x_i|Y = y_j, Z = z_k) = P(X = x_i|Z = z_k)

More compactly, we write P(X|Y, Z) = P(X|Z).

Example: Thunder is conditionally independent of Rain, given Lightning:
P(Thunder|Rain, Lightning) = P(Thunder|Lightning)

Naive Bayes uses conditional independence to justify:
P(X, Y|Z) = P(X|Y, Z) P(Y|Z) = P(X|Z) P(Y|Z)

Bayesian Belief Network

(Figure: a directed acyclic graph with nodes Storm, BusTourGroup, Lightning, Campfire, Thunder, ForestFire; the conditional probability table for Campfire, conditioned on Storm (S) and BusTourGroup (B), is shown.)

Campfire CPT:
         S,B    S,¬B   ¬S,B   ¬S,¬B
  C      0.4    0.1    0.8    0.2
  ¬C     0.6    0.9    0.2    0.8

- The network represents a set of conditional independence assertions: each node is asserted to be conditionally independent of its nondescendants, given its immediate predecessors.
- Directed acyclic graph; each node has a conditional probability table P(Node|Parents(Node)).
- The BBN represents the joint probability P(N_1, N_2, ...) in a compact form.

Bayesian Belief Network: Joint Probability

Represents the joint probability distribution over all variables, e.g., P(Storm, BusTourGroup, ..., ForestFire). In general,

P(Y_1 = y_1, ..., Y_n = y_n) = prod_{i=1}^n P(Y_i = y_i|Parents(Y_i))

where Parents(Y_i) denotes the immediate predecessors of Y_i in the graph, taking the y values specified on the left. So the joint distribution is fully defined by the graph plus the tables P(y_i|Parents(Y_i)).
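A minimal sketch of the m-estimate above. The counts are invented; p is the prior estimate and m the virtual-sample weight.

```python
# Sketch of the m-estimate P(a_i|v_j) ~ (n_c + m*p) / (n + m) described above.
def m_estimate(n_c, n, p, m):
    return (n_c + m * p) / (n + m)

# Attribute value never seen with this class (n_c = 0): the estimate stays above zero,
# so one unseen attribute value no longer forces the whole naive Bayes product to 0.
print(m_estimate(n_c=0, n=5, p=1/3, m=3))   # 0.125 instead of 0.0
print(m_estimate(n_c=3, n=5, p=1/3, m=3))   # 0.5
```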

Inference in Bayesian Networks

How can one infer the (probabilities of) values of one or more network variables, given observed values of others?
- The Bayes net contains all the information needed for this inference.
- If only one variable has an unknown value, it is easy to infer it.
- In the general case, the problem is NP-hard.
- In practice, we can succeed in many cases: exact inference methods work well for some network structures, and Monte Carlo methods simulate the network randomly to calculate approximate solutions.

Monte Carlo for Inference in BBN

We want to calculate an arbitrary conditional probability.
1. Generate many random samples based on the given BBN:
   (a) Sample from P(Storm) and P(BusTourGroup).
   (b) Based on the outcomes of the previous step, sample from P(Lightning|Storm = outcome), P(Campfire|Storm, BusTourGroup = outcomes), etc.
   (c) Combine all the outcomes to form a single sample vector.
2. Estimate the particular conditional probability based on the samples you generated.

Learning of Bayesian Networks

Several variants of this learning task:
- The network structure might be known or unknown.
- Training examples might provide values of all network variables, or just some.
If the structure is known and we observe all variables, then it is as easy as training a naive Bayes classifier.

Learning Bayes Nets

Suppose the structure is known and the variables are partially observable, e.g., we observe ForestFire, Storm, BusTourGroup, and Thunder, but not Lightning and Campfire.
- This is similar to training a neural network with hidden units.
- In fact, we can learn the network conditional probability tables using gradient ascent!
- Converge to the network h that (locally) maximizes P(D|h).
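A minimal sketch of the Monte Carlo procedure above on a fragment of the Campfire network. The Campfire CPT is the one shown earlier; the priors for Storm and BusTourGroup are made-up assumptions, since the slides do not give them.

```python
# Sketch of ancestral sampling on a fragment of the Campfire network.
# P_STORM and P_BUS are assumed priors; the Campfire CPT is the table shown above.
import random

random.seed(0)
P_STORM, P_BUS = 0.2, 0.3
P_CAMPFIRE = {(True, True): 0.4, (True, False): 0.1,
              (False, True): 0.8, (False, False): 0.2}   # P(C | S, B)

def sample():
    s = random.random() < P_STORM
    b = random.random() < P_BUS
    c = random.random() < P_CAMPFIRE[(s, b)]    # sample the child given its sampled parents
    return s, b, c

samples = [sample() for _ in range(100000)]
# Estimate an arbitrary conditional probability, e.g. P(Storm | Campfire = true):
campfire = [smp for smp in samples if smp[2]]
print(sum(1 for s, _, _ in campfire if s) / len(campfire))
```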

Gradient Ascent for Bayes Nets

(Figure: node Y_i with parents U_i = {A, B, ..., N}; the entries w_ijk = P(y_ij|u_ik) make up the conditional probability table for Y_i.)

- Parents of Y_i: U_i = {A, B, ..., N}. The parent value combinations are {u_i1, u_i2, ..., u_ik, ...}; each u_ik is a vector of parent values.
- Y_i takes values {y_i1, ..., y_ij, ...}; each y_ij is a scalar.
- Let w_ijk denote one entry in the conditional probability table for variable Y_i in the network:
  w_ijk = P(Y_i = y_ij|Parents(Y_i) = the list u_ik of values)
  e.g., if Y_i = Campfire, then u_ik might be <Storm = T, BusTourGroup = F>.

Gradient Ascent for Bayes Nets (cont.)

Perform gradient ascent on ln P(D|h) by repeatedly:

1. Updating all w_ijk using the training data D (P_h(·) means the probability given the current BBN h):
   w_ijk ← w_ijk + η sum_{d in D} P_h(Y_i = y_ij, U_i = u_ik|d) / w_ijk
2. Then renormalizing the w_ijk to assure sum_j w_ijk = 1 and 0 <= w_ijk <= 1.

Derivation of BN Gradient Ascent

∂ ln P(D|h)/∂w_ijk = ∂/∂w_ijk ln prod_{d in D} P_h(d)
  = sum_{d in D} ∂ ln P_h(d)/∂w_ijk
  = sum_{d in D} (1/P_h(d)) ∂P_h(d)/∂w_ijk
  = sum_{d in D} (1/P_h(d)) ∂/∂w_ijk [ sum_{j',k'} P_h(d|y_ij', u_ik') P_h(y_ij'|u_ik') P_h(u_ik') ]
  = sum_{d in D} (1/P_h(d)) ∂/∂w_ijk [ sum_{j',k'} P_h(d|y_ij', u_ik') w_ij'k' P_h(u_ik') ]
  = sum_{d in D} (1/P_h(d)) P_h(d|y_ij, u_ik) P_h(u_ik)

Derivation of BN Gradient Ascent (cont.)

  = sum_{d in D} (1/P_h(d)) [P_h(y_ij, u_ik|d) P_h(d) / P_h(y_ij, u_ik)] P_h(u_ik)
  = sum_{d in D} P_h(y_ij, u_ik|d) P_h(u_ik) / P_h(y_ij, u_ik)
  = sum_{d in D} P_h(y_ij, u_ik|d) P_h(u_ik) / (P_h(y_ij|u_ik) P_h(u_ik))
  = sum_{d in D} P_h(y_ij, u_ik|d) / P_h(y_ij|u_ik)
  = sum_{d in D} P_h(y_ij, u_ik|d) / w_ijk
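A simplified sketch of the update-and-renormalize recipe above, for the fully observed case only: when every variable is observed in a training example d, P_h(y_ij, u_ik|d) reduces to an indicator, so the gradient term just counts co-occurrences. The single-parent network fragment and the data are toy assumptions.

```python
# Simplified sketch of the CPT gradient-ascent update above (fully observed case):
# P_h(y_ij, u_ik | d) is 1 or 0 when d is fully observed. Data and network are toy.
from collections import defaultdict

# Observed (parent_value, child_value) pairs for one node Y with a single parent U.
data = [("T", "yes"), ("T", "yes"), ("T", "no"), ("F", "no"), ("F", "no"), ("F", "yes")]

w = defaultdict(lambda: 0.5)          # w[(j, k)] = P(Y = j | U = k), initialized uniformly
eta = 0.05

for _ in range(50):
    # Gradient step: w_jk += eta * sum_d P_h(y_j, u_k | d) / w_jk
    grad = defaultdict(float)
    for u, y in data:
        grad[(y, u)] += 1.0 / w[(y, u)]
    for key in grad:
        w[key] += eta * grad[key]
    # Renormalize so that sum_j w_jk = 1 for every parent value k.
    for u in {"T", "F"}:
        z = sum(w[(y, u)] for y in {"yes", "no"})
        for y in {"yes", "no"}:
            w[(y, u)] = w[(y, u)] / z

# Resulting CPT: the more frequent child value for each parent value gets the larger entry.
print({k: round(v, 2) for k, v in w.items()})
```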

Expectation Maximization (EM)

When to use:
- Data is only partially observable.
- Unsupervised clustering (target value unobservable).
- Supervised learning (some instance attributes unobservable).
Some uses:
- Training Bayesian belief networks
- Unsupervised clustering (AUTOCLASS)
- Learning hidden Markov models

EM for Estimating k Means

Given:
- Instances from X generated by a mixture of k Gaussian distributions
- Unknown means µ_1, ..., µ_k of the k Gaussians
- We don't know which instance x_i was generated by which Gaussian
Determine:
- Maximum likelihood estimates of µ_1, ..., µ_k

Think of the full description of each instance as y_i = <x_i, z_i1, z_i2>, where z_ij is 1 if x_i was generated by the j-th Gaussian; x_i is observable, the z_ij are unobservable.

EM for Estimating k Means (cont.)

EM algorithm: pick a random initial h = <µ_1, µ_2>, then iterate:

E step: Calculate the expected value E[z_ij] of each hidden variable z_ij, assuming the current hypothesis h = <µ_1, µ_2> holds:

E[z_ij] = p(x = x_i|µ = µ_j) / sum_{n=1}^2 p(x = x_i|µ = µ_n)
        = exp(-(x_i - µ_j)^2 / (2σ^2)) / sum_{n=1}^2 exp(-(x_i - µ_n)^2 / (2σ^2))

M step: Calculate a new maximum likelihood hypothesis h' = <µ_1', µ_2'>, assuming the value taken on by each hidden variable z_ij is its expected value E[z_ij] calculated above:

µ_j' = sum_{i=1}^m E[z_ij] x_i / sum_{i=1}^m E[z_ij]

Replace h = <µ_1, µ_2> by h' = <µ_1', µ_2'>.

EM Algorithm

- Converges to a local maximum likelihood h and provides estimates of the hidden variables z_ij.
- In fact, it finds a local maximum of E[ln P(Y|h)], where Y is the complete (observable plus unobservable) data and the expected value is taken over the possible values of the unobserved variables in Y.
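A minimal sketch of the two-mean EM procedure above: the E step computes the Gaussian responsibilities E[z_ij] with a known, shared sigma, and the M step sets µ_j to the responsibility-weighted mean. The data, sigma, and starting means are toy assumptions.

```python
# Sketch of EM for two Gaussian means with known, shared sigma (as described above).
import math, random

random.seed(0)
sigma = 1.0
xs = [random.gauss(-3.0, sigma) for _ in range(100)] + [random.gauss(4.0, sigma) for _ in range(100)]

mu = [0.0, 1.0]                                   # initial hypothesis <mu_1, mu_2> (distinct starts)

for _ in range(50):
    # E step: E[z_ij] proportional to exp(-(x_i - mu_j)^2 / (2 sigma^2))
    resp = []
    for x in xs:
        w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in mu]
        z = sum(w)
        resp.append([wj / z for wj in w])
    # M step: mu_j = sum_i E[z_ij] x_i / sum_i E[z_ij]
    mu = [sum(r[j] * x for r, x in zip(resp, xs)) / sum(r[j] for r in resp) for j in range(2)]

print([round(m, 2) for m in mu])   # close to the true means -3 and 4 (in some order)
```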

General EM Problem

Given:
- Observed data X = {x_1, ..., x_m}
- Unobserved data Z = {z_1, ..., z_m}
- A parameterized probability distribution P(Y|h), where Y = {y_1, ..., y_m} is the full data, y_i = x_i ∪ z_i, and h are the parameters.
Determine: the h that (locally) maximizes E[ln P(Y|h)].

General EM Method

Define a likelihood function Q(h'|h) which calculates Y = X ∪ Z, using the observed X and the current parameters h to estimate Z:

Q(h'|h) = E[ln P(Y|h')|h, X]

EM algorithm:
- Estimation (E) step: Calculate Q(h'|h) using the current hypothesis h and the observed data X to estimate the probability distribution over Y:
  Q(h'|h) ← E[ln P(Y|h')|h, X]
- Maximization (M) step: Replace hypothesis h by the hypothesis h' that maximizes this Q function:
  h ← argmax_{h'} Q(h'|h)

Derivation of k-Means

- The hypothesis h is parameterized by θ = <µ_1, ..., µ_k>.
- Observed data: X = {<x_i>}.
- Hidden variables: Z = {<z_i1, ..., z_ik>}, where z_ik = 1 if input x_i was generated by the k-th normal distribution. There are k such entries for each input.
First, start by defining ln P(Y|h).

Deriving ln P(Y|h)

p(y_i|h') = p(x_i, z_i1, z_i2, ..., z_ik|h') = (1/sqrt(2πσ^2)) exp(-(1/(2σ^2)) sum_{j=1}^k z_ij (x_i - µ_j')^2)

Note that the vector <z_i1, ..., z_ik> contains only a single 1 and all the rest are 0.

ln P(Y|h') = ln prod_{i=1}^m p(y_i|h') = sum_{i=1}^m ln p(y_i|h')
           = sum_{i=1}^m ( ln(1/sqrt(2πσ^2)) - (1/(2σ^2)) sum_{j=1}^k z_ij (x_i - µ_j')^2 )

Deriving E[ln P(Y|h)]

Since ln P(Y|h') is a linear function of the z_ij, and since E[f(z)] = f(E[z]) for a linear function f:

E[ln P(Y|h')] = E[ sum_{i=1}^m ( ln(1/sqrt(2πσ^2)) - (1/(2σ^2)) sum_{j=1}^k z_ij (x_i - µ_j')^2 ) ]
              = sum_{i=1}^m ( ln(1/sqrt(2πσ^2)) - (1/(2σ^2)) sum_{j=1}^k E[z_ij] (x_i - µ_j')^2 )

Thus,

Q(h'|h) = sum_{i=1}^m ( ln(1/sqrt(2πσ^2)) - (1/(2σ^2)) sum_{j=1}^k E[z_ij] (x_i - µ_j')^2 )

Finding argmax_{h'} Q(h'|h)

With

E[z_ij] = exp(-(x_i - µ_j)^2 / (2σ^2)) / sum_{n=1}^k exp(-(x_i - µ_n)^2 / (2σ^2)),

we want to find h' such that

argmax_{h'} Q(h'|h) = argmax_{h'} sum_{i=1}^m ( ln(1/sqrt(2πσ^2)) - (1/(2σ^2)) sum_{j=1}^k E[z_ij] (x_i - µ_j')^2 )
                    = argmin_{h'} sum_{i=1}^m sum_{j=1}^k E[z_ij] (x_i - µ_j')^2,

which is minimized by

µ_j = sum_{i=1}^m E[z_ij] x_i / sum_{i=1}^m E[z_ij].

Deriving the Update Rule

Set the derivative of the quantity to be minimized to zero:

∂/∂µ_j sum_{i=1}^m sum_{j'=1}^k E[z_ij'] (x_i - µ_j')^2 = ∂/∂µ_j sum_{i=1}^m E[z_ij] (x_i - µ_j)^2
  = -2 sum_{i=1}^m E[z_ij] (x_i - µ_j) = 0
⇒ sum_{i=1}^m E[z_ij] x_i - µ_j sum_{i=1}^m E[z_ij] = 0
⇒ µ_j = sum_{i=1}^m E[z_ij] x_i / sum_{i=1}^m E[z_ij]

See Bishop (1995), Neural Networks for Pattern Recognition, Oxford University Press, pp. 63-64.
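A quick numeric check of the closed-form update above: the responsibility-weighted mean minimizes the weighted squared error. The x values and responsibilities E[z_ij] below are made up.

```python
# Check that mu_j = sum_i E[z_ij] x_i / sum_i E[z_ij] minimizes sum_i E[z_ij] (x_i - mu)^2.
xs = [0.5, 1.8, 3.1, 4.0]
ez = [0.9, 0.6, 0.2, 0.1]      # E[z_ij] for one fixed j (invented values)

def weighted_sse(mu):
    return sum(e * (x - mu) ** 2 for x, e in zip(xs, ez))

mu_star = sum(e * x for x, e in zip(xs, ez)) / sum(ez)
candidates = [mu_star + d / 100 for d in range(-200, 201)]
best = min(candidates, key=weighted_sse)
print(round(mu_star, 3), round(best, 3))   # the closed-form mean matches the grid minimum
```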