Bayesian Learning


Bayesian Learning: Two Roles for Bayesian Methods

Bayesian methods take a probabilistic approach to inference: the quantities of interest are governed by probability distributions, and optimal decisions can be made by reasoning about these probabilities. They play two roles in machine learning:

- Learning algorithms that deal directly with probabilities, e.g. naive Bayes learning and Bayesian belief network learning. These combine prior knowledge (prior probabilities) with the observed data, but require prior probabilities to be specified.
- An analysis framework for non-probabilistic methods: Bayesian reasoning provides a useful conceptual framework, a gold standard for evaluating other learning algorithms, and additional insight into Occam's razor.

Bayes Theorem

P(h|D) = \frac{P(D|h) P(h)}{P(D)}

- P(h): prior probability that h holds, before seeing the training data
- P(D): prior probability of observing training data D
- P(D|h): probability of observing D in a world where h holds
- P(h|D): probability of h holding given the observed data D

Choosing Hypotheses

Generally we want the most probable hypothesis given the training data, the maximum a posteriori (MAP) hypothesis h_MAP:

h_{MAP} = \arg\max_{h \in H} P(h|D) = \arg\max_{h \in H} \frac{P(D|h) P(h)}{P(D)} = \arg\max_{h \in H} P(D|h) P(h)

(P(D) can be dropped because it does not depend on h.)
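As a minimal sketch of MAP selection over a finite hypothesis space, the following Python snippet picks the hypothesis maximizing P(D|h) P(h); the priors and likelihoods are made-up illustration values, not from the notes.

```python
# Minimal sketch: choosing the MAP hypothesis over a finite hypothesis space.
# The priors and likelihoods below are illustrative values, not from the notes.
priors = {"h1": 0.3, "h2": 0.3, "h3": 0.4}          # P(h)
likelihoods = {"h1": 0.1, "h2": 0.05, "h3": 0.02}   # P(D|h) for the observed D

def map_hypothesis(priors, likelihoods):
    # argmax_h P(D|h) P(h); P(D) is a constant and can be ignored
    return max(priors, key=lambda h: likelihoods[h] * priors[h])

def ml_hypothesis(likelihoods):
    # argmax_h P(D|h) -- the MAP hypothesis under a uniform prior
    return max(likelihoods, key=likelihoods.get)

print(map_hypothesis(priors, likelihoods))  # h1: 0.1 * 0.3 = 0.03 beats the others
print(ml_hypothesis(likelihoods))           # h1
```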

Choosing Hypotheses: Maximum Likelihood

If all hypotheses are equally probable a priori, P(h_i) = P(h_j) for all h_i, h_j \in H, then h_MAP reduces to the maximum likelihood (ML) hypothesis:

h_{ML} = \arg\max_{h \in H} P(D|h)

Bayes Theorem: Example

Does the patient have cancer or not? A patient takes a lab test and the result comes back positive. The test returns a correct positive result in only 98% of the cases in which the disease is actually present, and a correct negative result in only 97% of the cases in which the disease is not present. Furthermore, 0.008 of the entire population have this cancer.

P(cancer) = 0.008          P(\neg cancer) = 0.992
P(+|cancer) = 0.98         P(-|cancer) = 0.02
P(+|\neg cancer) = 0.03    P(-|\neg cancer) = 0.97

How does P(cancer|+) compare to P(\neg cancer|+)? What is h_MAP?

Basic Probability Formulas

- Product rule: probability of a conjunction of two events A and B:
  P(A \wedge B) = P(A|B) P(B) = P(B|A) P(A)
- Sum rule: probability of a disjunction of two events A and B:
  P(A \vee B) = P(A) + P(B) - P(A \wedge B)
- Theorem of total probability: if events A_1, ..., A_n are mutually exclusive with \sum_{i=1}^{n} P(A_i) = 1, then
  P(B) = \sum_{i=1}^{n} P(B|A_i) P(A_i)

Brute Force MAP Hypothesis Learner

1. For each hypothesis h in H, calculate the posterior probability
   P(h|D) = \frac{P(D|h) P(h)}{P(D)}
2. Output the hypothesis h_MAP with the highest posterior probability
   h_{MAP} = \arg\max_{h \in H} P(h|D)
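As a concrete check of the cancer example above, a short Python sketch computes the unnormalized and normalized posteriors for a positive test result.

```python
# Posteriors for the cancer example: P(cancer|+) vs P(not cancer|+).
p_cancer, p_not = 0.008, 0.992
p_pos_given_cancer, p_pos_given_not = 0.98, 0.03

# Unnormalized posteriors P(+|h) P(h)
u_cancer = p_pos_given_cancer * p_cancer   # 0.98 * 0.008 = 0.00784
u_not = p_pos_given_not * p_not            # 0.03 * 0.992 = 0.02976

# Normalize by P(+) = sum of the two unnormalized terms
p_pos = u_cancer + u_not
print(u_cancer / p_pos)  # P(cancer|+)      ~ 0.21
print(u_not / p_pos)     # P(not cancer|+)  ~ 0.79  -> h_MAP = not cancer
```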

Relation to Concept Learning

Consider our usual concept learning task: instance space X, hypothesis space H, training examples D. Consider the Find-S learning algorithm, which outputs the most specific hypothesis from the version space VS_{H,D}. What would Bayes rule produce as the MAP hypothesis? Does Find-S output a MAP hypothesis?

Concept Learning: Assumptions

1. The training data D is noise free.
2. The target concept c is contained in the hypothesis space H.
3. There is no a priori reason to believe any hypothesis h_i is more probable than any other:
   P(h) = \frac{1}{|H|} for all h \in H.

Concept Learning: P(D|h)

P(D|h) is the probability of observing the target values D = \langle d_1, d_2, ..., d_m \rangle for the fixed set of instances \langle x_1, x_2, ..., x_m \rangle, given a world in which h holds, i.e. h is the correct description of the target concept c (h(x) = c(x)). So there are only two possibilities:

P(D|h) = 1 if h is consistent with D
P(D|h) = 0 otherwise

Concept Learning: P(D)

Use the theorem of total probability:

P(D) = \sum_{h_i \in H} P(D|h_i) P(h_i)
     = \sum_{h_i \in VS_{H,D}} 1 \cdot \frac{1}{|H|} + \sum_{h_i \notin VS_{H,D}} 0 \cdot \frac{1}{|H|}
     = \frac{|VS_{H,D}|}{|H|}

Concept Learning: Applying Bayes Rule

In case h is inconsistent with D:

P(h|D) = \frac{P(D|h) P(h)}{P(D)} = \frac{0 \cdot P(h)}{P(D)} = 0

In case h is consistent with D:

P(h|D) = \frac{P(D|h) P(h)}{P(D)} = \frac{1 \cdot \frac{1}{|H|}}{\frac{|VS_{H,D}|}{|H|}} = \frac{1}{|VS_{H,D}|}

Relation to Concept Learning: Summary

- Assume a fixed set of instances x_1, ..., x_m.
- Assume D is the set of classifications D = \langle c(x_1), ..., c(x_m) \rangle.
- Choose P(D|h) = 1 if h is consistent with D, and P(D|h) = 0 otherwise.
- Choose P(h) to be the uniform distribution: P(h) = \frac{1}{|H|} for all h in H.

Then:

P(h|D) = \frac{1}{|VS_{H,D}|} if h is consistent with D, and 0 otherwise.

Every consistent hypothesis is a MAP hypothesis!

Evolution of P(h|D_1, ...)

[Figure: three plots over the hypothesis space -- (a) the uniform prior P(h), (b) the posterior P(h|D_1), (c) the posterior P(h|D_1, D_2).]

As more data are observed, the posterior probability of the consistent hypotheses increases: in (b), hypotheses inconsistent with D_1 are excluded and the remaining probability mass is redistributed, and so on in (c). Each consistent hypothesis ends up with posterior 1/|VS_{H,D}|, which exceeds its prior 1/|H| since |VS_{H,D}| < |H|.

Find-S: Consistent Learner

Every consistent learner generates a MAP hypothesis under these assumptions. Since Find-S is a consistent learner (when the data set is noise free), it produces a MAP hypothesis. Even though Find-S does not deal with probability at all, a Bayesian analysis provides a way to characterize the behavior of the algorithm. Also, by identifying P(h) and P(D|h), we can characterize the implicit assumptions under which the algorithm behaves optimally. A tiny numerical check of the 1/|VS_{H,D}| result appears below.
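The following sketch uses a made-up hypothesis space (all boolean labelings of two instances, purely illustrative) to verify that a uniform prior plus the 0/1 likelihood gives each consistent hypothesis posterior 1/|VS_{H,D}|.

```python
from itertools import product

# Toy check: uniform prior + 0/1 likelihood => posterior 1/|VS| for consistent h.
# Hypothesis space: all boolean labelings of two instances (illustrative only).
instances = ["x1", "x2"]
H = list(product([0, 1], repeat=len(instances)))   # each h is a tuple of labels
D = {"x1": 1}                                      # observed: c(x1) = 1

def consistent(h):
    return all(h[instances.index(x)] == d for x, d in D.items())

prior = 1 / len(H)
unnorm = {h: (1.0 if consistent(h) else 0.0) * prior for h in H}  # P(D|h) P(h)
p_D = sum(unnorm.values())                                        # = |VS| / |H|
posterior = {h: u / p_D for h, u in unnorm.items()}

vs_size = sum(consistent(h) for h in H)
print(vs_size, [round(p, 3) for p in posterior.values()])
# each of the |VS| = 2 consistent hypotheses gets 1/|VS| = 0.5; the rest get 0
```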

Learning a Real-Valued Function: Setting the Stage

[Figure: training points scattered around the target function f, with the maximum likelihood hypothesis h_ML fit through them and the noise e marked on one example.]

Consider any real-valued target function f. The training examples are \langle x_i, d_i \rangle, where d_i is a noisy training value:

d_i = f(x_i) + e_i

and e_i is a random variable (noise) drawn independently for each x_i from a Gaussian distribution with mean 0. Then the maximum likelihood hypothesis h_ML is the one that minimizes the sum of squared errors:

h_{ML} = \arg\min_{h \in H} \sum_{i=1}^{m} (d_i - h(x_i))^2

ML Hypothesis

Probability density function:

p(x_0) = \lim_{\epsilon \to 0} \frac{1}{\epsilon} P(x_0 \le x < x_0 + \epsilon)

h_{ML} = \arg\max_{h \in H} p(D|h)

With training instances x_1, ..., x_m and target values d_1, ..., d_m, where d_i = f(x_i) + e_i, and assuming the training examples are mutually independent given h:

h_{ML} = \arg\max_{h \in H} \prod_{i=1}^{m} p(d_i|h)

Note: p(a, b|c) = p(a|b, c) p(b|c) = p(a|c) p(b|c) when a and b are independent given c.

Derivation of ML for Function Approximation

Since d_i = f(x_i) + e_i and e_i \sim N(0, \sigma^2), it must be that d_i \sim N(f(x_i), \sigma^2). (x \sim N(\mu, \sigma^2) means the random variable x is normally distributed with mean \mu and variance \sigma^2.) Using the pdf of the normal distribution:

h_{ML} = \arg\max_{h \in H} \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(d_i - \mu)^2}{2\sigma^2}}
       = \arg\max_{h \in H} \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(d_i - h(x_i))^2}{2\sigma^2}}

Derivation of ML (continued)

Get rid of the constant factor \frac{1}{\sqrt{2\pi\sigma^2}} and take the log:

h_{ML} = \arg\max_{h \in H} \sum_{i=1}^{m} \ln e^{-\frac{(d_i - h(x_i))^2}{2\sigma^2}}
       = \arg\max_{h \in H} \sum_{i=1}^{m} -\frac{(d_i - h(x_i))^2}{2\sigma^2}
       = \arg\min_{h \in H} \sum_{i=1}^{m} (d_i - h(x_i))^2
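A small numerical sketch of the equivalence above: for Gaussian noise with fixed variance, the hypothesis that maximizes the log-likelihood is the same one that minimizes the sum of squared errors. The toy data and the one-parameter hypothesis class h_w(x) = w x are illustrative assumptions, not from the notes.

```python
import numpy as np

# For Gaussian noise, maximizing the likelihood equals minimizing squared error.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
d = 2.0 * x + rng.normal(0.0, 0.1, size=x.shape)   # d_i = f(x_i) + e_i

sigma2 = 0.1 ** 2
ws = np.linspace(0.0, 4.0, 401)                    # candidate hypotheses h_w(x) = w x

sse = np.array([np.sum((d - w * x) ** 2) for w in ws])
log_lik = np.array([
    np.sum(-0.5 * np.log(2 * np.pi * sigma2) - (d - w * x) ** 2 / (2 * sigma2))
    for w in ws
])

print(ws[np.argmin(sse)], ws[np.argmax(log_lik)])  # the same w wins both criteria
```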

Least Squares as ML

Assumptions:
- The observed training values d_i are generated by adding random noise to the true target value, where the noise has a normal distribution with zero mean.
- All hypotheses are equally probable (uniform prior).

Limitations:
- Note: it is possible that h_MAP \neq h_ML (if the prior over hypotheses is not uniform)!
- Possible noise in x_i is not accounted for.

Learning to Predict Probabilities

Consider predicting survival probability from patient data. The training examples are \langle x_i, d_i \rangle, where d_i is 1 or 0. We want to train a network to output a probability given x_i (not 0 or 1). In this case we can show:

h_{ML} = \arg\max_{h \in H} \sum_{i=1}^{m} d_i \ln h(x_i) + (1 - d_i) \ln(1 - h(x_i))

Weight update rule for a sigmoid unit:

w_{jk} \leftarrow w_{jk} + \Delta w_{jk}, where \Delta w_{jk} = \eta \sum_{i=1}^{m} (d_i - h(x_i)) \, x_{ijk}

Learning to Predict Probabilities: P(D|h)

First start with P(D|h), given D = \{\langle x_1, d_1 \rangle, ..., \langle x_m, d_m \rangle\}:

P(D|h) = \prod_{i=1}^{m} P(x_i, d_i|h) = \prod_{i=1}^{m} P(d_i|h, x_i) P(x_i|h)

Assuming P(x_i|h) = P(x_i):

P(D|h) = \prod_{i=1}^{m} P(d_i|h, x_i) P(x_i)

Note: P(A, B|C) = P(A|B, C) P(B|C).

h(x_i) is the probability of d_i = 1 given the sample x_i, thus:

P(d_i|h, x_i) = h(x_i) if d_i = 1
P(d_i|h, x_i) = 1 - h(x_i) if d_i = 0

Rewriting the above as a single expression:

P(d_i|h, x_i) = h(x_i)^{d_i} (1 - h(x_i))^{1 - d_i}

Thus:

P(D|h) = \prod_{i=1}^{m} P(d_i|h, x_i) P(x_i) = \prod_{i=1}^{m} h(x_i)^{d_i} (1 - h(x_i))^{1 - d_i} P(x_i)

Learning to Predict Probabilities: h_ML

h_{ML} = \arg\max_{h \in H} \prod_{i=1}^{m} h(x_i)^{d_i} (1 - h(x_i))^{1 - d_i} P(x_i)
       = \arg\max_{h \in H} \prod_{i=1}^{m} h(x_i)^{d_i} (1 - h(x_i))^{1 - d_i}

since P(x_i) is independent of h. Finally, taking ln:

h_{ML} = \arg\max_{h \in H} \sum_{i=1}^{m} d_i \ln h(x_i) + (1 - d_i) \ln(1 - h(x_i))

Note the similarity of the above to entropy (turn it into argmin and compare to -\sum_i p_i \log_2 p_i): this quantity is a cross entropy.

Learning to Predict Probabilities: Gradient Descent

Letting G(h, D) denote the quantity to be maximized, and putting in a neural network with a sigmoid output unit h(x_i):

\frac{\partial G(h, D)}{\partial w_{jk}}
  = \sum_{i=1}^{m} \frac{\partial G(h, D)}{\partial h(x_i)} \frac{\partial h(x_i)}{\partial w_{jk}}
  = \sum_{i=1}^{m} \frac{\partial \left[ d_i \ln h(x_i) + (1 - d_i) \ln(1 - h(x_i)) \right]}{\partial h(x_i)} \frac{\partial h(x_i)}{\partial w_{jk}}
  = \sum_{i=1}^{m} \frac{d_i - h(x_i)}{h(x_i)(1 - h(x_i))} \frac{\partial h(x_i)}{\partial w_{jk}}
  = \sum_{i=1}^{m} \frac{d_i - h(x_i)}{h(x_i)(1 - h(x_i))} \, \sigma'(x_i) x_{ijk}
  = \sum_{i=1}^{m} (d_i - h(x_i)) x_{ijk}

Note: \frac{d \ln x}{dx} = \frac{1}{x}, and \sigma'(x_i) = h(x_i)(1 - h(x_i)).

Learning Probabilities: Weight Update

We want to maximize (not minimize) G(h, D), thus gradient ascent:

\Delta w_{jk} = \eta \sum_{i=1}^{m} (d_i - h(x_i)) x_{ijk}, \quad w_{jk} \leftarrow w_{jk} + \Delta w_{jk}

Following the above rule will produce (a local maximum of the likelihood, i.e.) h_ML. Compare to backpropagation!

Minimum Description Length

Occam's razor: prefer the shortest hypothesis.

h_{MAP} = \arg\max_{h \in H} P(D|h) P(h)
        = \arg\max_{h \in H} \log_2 P(D|h) + \log_2 P(h)
        = \arg\min_{h \in H} -\log_2 P(D|h) - \log_2 P(h)

Surprisingly, the above can be interpreted as h_MAP preferring shorter hypotheses, assuming a particular encoding scheme is used for the hypothesis and the data. According to information theory, the shortest code length for a message occurring with probability p_i is -\log_2 p_i bits.
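A compact sketch of the weight update above for a single sigmoid unit, using plain NumPy on synthetic data (the data, learning rate, and iteration count are illustrative assumptions).

```python
import numpy as np

# Gradient ascent on the cross-entropy likelihood for a single sigmoid unit:
# delta_w = eta * sum_i (d_i - h(x_i)) * x_i. Toy data, for illustration only.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                      # inputs x_i (3 features)
true_w = np.array([1.5, -2.0, 0.5])
d = (rng.random(100) < 1 / (1 + np.exp(-X @ true_w))).astype(float)  # labels

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.zeros(3)
eta = 0.01
for _ in range(300):
    h = sigmoid(X @ w)          # h(x_i): predicted probabilities
    w += eta * X.T @ (d - h)    # the update rule from the slide

print(np.round(w, 2))           # moves toward the direction of true_w
```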

MDL

h_{MAP} = \arg\min_{h \in H} -\log_2 P(D|h) - \log_2 P(h)

Let L_C(i) denote the description length of message i with respect to code C.

- -\log_2 P(h) is the description length of h under the optimal coding C_H for the hypothesis space H: L_{C_H}(h) = -\log_2 P(h).
- -\log_2 P(D|h) is the description length of the training data D given hypothesis h, under the optimal encoding C_{D|H}: L_{C_{D|H}}(D|h) = -\log_2 P(D|h).

Finally, we get:

MAP: h_{MAP} = \arg\min_{h \in H} L_{C_{D|H}}(D|h) + L_{C_H}(h)

MDL: choose h_MDL such that

h_{MDL} = \arg\min_{h \in H} L_{C_1}(h) + L_{C_2}(D|h)

which is the hypothesis that minimizes the combined length of the hypothesis itself and of the data described by the hypothesis. h_MDL = h_MAP if C_1 = C_H and C_2 = C_{D|H}.

Bayes Optimal Classifier

"What is the most probable hypothesis given the training data?" vs. "What is the most probable classification?"

Example: P(h_1|D) = 0.4, P(h_2|D) = 0.3, P(h_3|D) = 0.3. Given a new instance x: h_1(x) = 1, h_2(x) = 0, h_3(x) = 0. In this case, the probability of x being positive is only 0.4, even though the MAP hypothesis h_1 classifies it as positive.

Bayes Optimal Classification

If a new instance can take classification v_j \in V, then the probability P(v_j|D) that the correct classification of the new instance is v_j is:

P(v_j|D) = \sum_{h_i \in H} P(v_j|h_i) P(h_i|D)

Thus, the optimal classification is

\arg\max_{v_j \in V} \sum_{h_i \in H} P(v_j|h_i) P(h_i|D)

Bayes Optimal Classifier

What is the assumption for the following to work?

P(v_j|D) = \sum_{h_i \in H} P(v_j|h_i) P(h_i|D)

Let's consider H = \{h, \neg h\}:

P(v|D) = P(v, h|D) + P(v, \neg h|D)
       = P(v|h, D) P(h|D) + P(v|\neg h, D) P(\neg h|D)
       = {if P(v|h, D) = P(v|h), etc.}
       = P(v|h) P(h|D) + P(v|\neg h) P(\neg h|D)

So the assumption is that the classification depends on the data only through the hypothesis: P(v|h, D) = P(v|h).

Bayes Optimal Classifier: Example

P(h_1|D) = 0.4, P(h_2|D) = 0.3, P(h_3|D) = 0.3. Given a new instance x: h_1(x) = 1, h_2(x) = 0, h_3(x) = 0, so P(1|h_1) = 1, P(0|h_1) = 0, etc. Then

P(1|D) = 1 \cdot 0.4 + 0 \cdot 0.3 + 0 \cdot 0.3 = 0.4
P(0|D) = 0 \cdot 0.4 + 1 \cdot 0.3 + 1 \cdot 0.3 = 0.6

Thus, \arg\max_{v \in \{1, 0\}} P(v|D) = 0.

Bayes optimal classifiers maximize the probability that a new instance is correctly classified, given the available data, hypothesis space H, and prior probabilities over H. One oddity: the resulting classifier can be outside of the hypothesis space H.

Gibbs Sampling

Finding \arg\max_{v \in V} P(v|D) by considering every hypothesis h \in H can be infeasible. A less optimal, but error-bounded, alternative is the Gibbs algorithm:

1. Randomly pick h \in H with probability P(h|D).
2. Use h to classify the new instance x.

The result is that the expected misclassification rate is at most 2 times that of the Bayes optimal classifier. Example: in concept learning, if h has a uniform prior, then randomly picking any h from the version space will result in an expected error of at most twice that of the Bayes optimal classifier.

Naive Bayes Classifier

Given attribute values a_1, a_2, ..., a_n, give the classification v \in V:

v_{MAP} = \arg\max_{v_j \in V} P(v_j|a_1, a_2, ..., a_n)
        = \arg\max_{v_j \in V} \frac{P(a_1, a_2, ..., a_n|v_j) P(v_j)}{P(a_1, a_2, ..., a_n)}
        = \arg\max_{v_j \in V} P(a_1, a_2, ..., a_n|v_j) P(v_j)

We want to estimate P(a_1, a_2, ..., a_n|v_j) and P(v_j) from the training data.
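A short Python sketch of the three-hypothesis example above, computing the Bayes optimal classification and, for contrast, the Gibbs alternative of sampling a single hypothesis from P(h|D).

```python
import random

# Bayes optimal classification for the three-hypothesis example above,
# plus the Gibbs alternative of sampling one hypothesis from P(h|D).
posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}       # P(h|D)
prediction = {"h1": 1, "h2": 0, "h3": 0}            # h(x) for the new instance x

def bayes_optimal(posterior, prediction):
    # argmax_v sum_h P(v|h) P(h|D); here P(v|h) is 1 if h predicts v, else 0
    score = {0: 0.0, 1: 0.0}
    for h, p in posterior.items():
        score[prediction[h]] += p
    return max(score, key=score.get), score

def gibbs_classify(posterior, prediction):
    # pick one hypothesis at random according to P(h|D) and use its prediction
    h = random.choices(list(posterior), weights=posterior.values())[0]
    return prediction[h]

print(bayes_optimal(posterior, prediction))   # (0, {0: 0.6, 1: 0.4})
print(gibbs_classify(posterior, prediction))  # 1 with prob 0.4, 0 with prob 0.6
```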

Naive Bayes

- P(v_j) is easy to calculate: just count the frequency of each target value.
- Estimating P(a_1, a_2, ..., a_n|v_j) directly requires a number of terms equal to the number of possible instances times the number of possible target values.
- Under the naive Bayes assumption it can be approximated as

  P(a_1, a_2, ..., a_n|v_j) = \prod_i P(a_i|v_j)

  so naive Bayes only requires a number of terms equal to the number of distinct attribute values times the number of distinct target values.

Naive Bayes Algorithm

Naive_Bayes_Learn(examples):
  For each target value v_j
    \hat{P}(v_j) <- estimate P(v_j)
    For each attribute value a_i of each attribute a
      \hat{P}(a_i|v_j) <- estimate P(a_i|v_j)

From this, the naive Bayes classifier is defined as:

Classify_New_Instance(x):
  v_{NB} = \arg\max_{v_j \in V} \hat{P}(v_j) \prod_i \hat{P}(x_i|v_j)

Naive Bayes: Example

Consider PlayTennis again, and the new instance

x = \langle Outlook = sunny, Temp = cool, Humid = high, Wind = strong \rangle

with V = \{Yes, No\}. We want to compute

v_{NB} = \arg\max_{v_j \in V} P(v_j) \prod_i P(x_i|v_j)

P(Y) P(sunny|Y) P(cool|Y) P(high|Y) P(strong|Y) \approx 0.005
P(N) P(sunny|N) P(cool|N) P(high|N) P(strong|N) \approx 0.021

Thus, v_NB = No.

Naive Bayes: Subtleties

1. The conditional independence assumption

   P(a_1, a_2, ..., a_n|v_j) = \prod_i P(a_i|v_j)

   is often violated, but naive Bayes works surprisingly well anyway. Note that we don't need the estimated posteriors \hat{P}(v_j|x) to be correct; we need only that

   \arg\max_{v_j \in V} \hat{P}(v_j) \prod_i \hat{P}(a_i|v_j) = \arg\max_{v_j \in V} P(v_j) P(a_1, ..., a_n|v_j)

   Naive Bayes posteriors are, however, often unrealistically close to 1 or 0.
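A sketch of the PlayTennis computation above, using the usual conditional-probability estimates derived from the standard 14-example PlayTennis table (the fractions below are those customary estimates, reproduced here as an assumption about that table).

```python
from math import prod

# Naive Bayes scores for the PlayTennis instance above, using the customary
# estimates from the standard 14-example PlayTennis table.
p_yes, p_no = 9/14, 5/14
cond = {
    "Yes": {"sunny": 2/9, "cool": 3/9, "high": 3/9, "strong": 3/9},
    "No":  {"sunny": 3/5, "cool": 1/5, "high": 4/5, "strong": 3/5},
}
x = ["sunny", "cool", "high", "strong"]

score_yes = p_yes * prod(cond["Yes"][a] for a in x)   # ~0.005
score_no = p_no * prod(cond["No"][a] for a in x)      # ~0.021

print(max([("Yes", score_yes), ("No", score_no)], key=lambda t: t[1]))
# ('No', 0.0206...) -> v_NB = No, matching the slide
```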

Naive Bayes: Subtleties (continued)

2. What if none of the training instances with target value v_j have attribute value a_i? Then \hat{P}(a_i|v_j) = 0, and

   \hat{P}(v_j) \prod_i \hat{P}(a_i|v_j) = 0

   regardless of the other attributes. The typical solution is a Bayesian (m-)estimate for \hat{P}(a_i|v_j):

   \hat{P}(a_i|v_j) = \frac{n_c + m p}{n + m}

   where
   - n is the number of training examples for which v = v_j,
   - n_c is the number of examples for which v = v_j and a = a_i,
   - p is a prior estimate for \hat{P}(a_i|v_j), and
   - m is the weight given to the prior (i.e. the number of virtual examples).

Conditional Independence

Definition: X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y given the value of Z; that is, if

(\forall x_i, y_j, z_k) \; P(X = x_i | Y = y_j, Z = z_k) = P(X = x_i | Z = z_k)

More compactly, we write P(X|Y, Z) = P(X|Z).

Example: Thunder is conditionally independent of Rain, given Lightning:

P(Thunder|Rain, Lightning) = P(Thunder|Lightning)

Naive Bayes uses conditional independence to justify

P(X, Y|Z) = P(X|Y, Z) P(Y|Z) = P(X|Z) P(Y|Z)

Bayesian Belief Network

[Figure: a belief network over Storm, BusTourGroup, Lightning, Campfire, Thunder, and ForestFire, with the conditional probability table for Campfire given Storm and BusTourGroup shown.]

- The network represents a set of conditional independence assertions: each node is asserted to be conditionally independent of its nondescendants, given its immediate predecessors.
- It is a directed acyclic graph, and each node has a conditional probability table P(Node|Parents(Node)).
- A BBN represents the joint probability P(N_1, N_2, ...) in a compact form.

It represents the joint probability distribution over all variables, e.g. P(Storm, BusTourGroup, ..., ForestFire). In general,

P(Y_1 = y_1, ..., Y_n = y_n) = \prod_{i=1}^{n} P(Y_i = y_i | Parents(Y_i))

where Parents(Y_i) denotes the immediate predecessors of Y_i in the graph, taking the y values specified on the left. So the joint distribution is fully defined by the graph plus the tables P(y_i|Parents(Y_i)).
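A minimal sketch of the factored joint probability above, using the Storm/Campfire network structure from the slide. The CPT numbers are made up purely for illustration; the notes do not give them.

```python
# The probability of one full assignment in a belief network is the product of
# P(node = value | parents) over all nodes. Structure follows the slide's
# Storm/Campfire example; the CPT numbers below are invented placeholders.
parents = {
    "Storm": [], "BusTourGroup": [],
    "Lightning": ["Storm"], "Campfire": ["Storm", "BusTourGroup"],
    "Thunder": ["Lightning"], "ForestFire": ["Lightning", "Campfire"],
}
# P(node = True | parent values), keyed by the tuple of parent truth values.
p_true = {
    "Storm": {(): 0.2},
    "BusTourGroup": {(): 0.5},
    "Lightning": {(True,): 0.7, (False,): 0.05},
    "Campfire": {(True, True): 0.1, (True, False): 0.05,
                 (False, True): 0.8, (False, False): 0.1},
    "Thunder": {(True,): 0.9, (False,): 0.01},
    "ForestFire": {(True, True): 0.5, (True, False): 0.3,
                   (False, True): 0.2, (False, False): 0.01},
}

def joint(assignment):
    prob = 1.0
    for node, value in assignment.items():
        key = tuple(assignment[p] for p in parents[node])
        p = p_true[node][key]
        prob *= p if value else (1.0 - p)
    return prob

a = {"Storm": True, "BusTourGroup": False, "Lightning": True,
     "Campfire": False, "Thunder": True, "ForestFire": False}
print(joint(a))  # product of the six local terms
```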

Inference in Bayesian Networks

How can one infer the (probabilities of) values of one or more network variables, given observed values of others?

- The Bayes net contains all the information needed for this inference.
- If only one variable has an unknown value, it is easy to infer it.
- In the general case, the problem is NP-hard.

In practice, we can succeed in many cases:

- Exact inference methods work well for some network structures.
- Monte Carlo methods simulate the network randomly to calculate approximate solutions.

Monte Carlo for Inference in BBN

We want to calculate an arbitrary conditional probability.

1. Generate many random samples based on the given BBN:
   (a) Sample from P(Storm) and P(BusTourGroup).
   (b) Based on the outcomes of the previous step, sample from P(Lightning|Storm = outcome), P(Campfire|Storm, BusTourGroup = outcomes), etc.
   (c) Combine all the outcomes to form a single sample vector.
2. Estimate the desired conditional probability from the samples you generated.

Learning of Bayesian Networks

There are several variants of this learning task:

- The network structure might be known or unknown.
- The training examples might provide values of all network variables, or just some.

If the structure is known and we observe all variables, then it is as easy as training a naive Bayes classifier.

Learning Bayes Nets

Suppose the structure is known and the variables are partially observable, e.g. we observe ForestFire, Storm, BusTourGroup, and Thunder, but not Lightning and Campfire. This is similar to training a neural network with hidden units. In fact, we can learn the network's conditional probability tables using gradient ascent, converging to the network h that (locally) maximizes P(D|h).
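A sketch of the sampling loop described above (ancestral sampling): draw each node in topological order conditioned on its already-sampled parents, then estimate a conditional probability from the samples. It reuses the `parents` and `p_true` tables from the previous sketch, so the numbers remain illustrative.

```python
import random

# Ancestral sampling in the toy network, then a Monte Carlo estimate of a
# conditional probability. Reuses `parents` and `p_true` from the sketch above.
order = ["Storm", "BusTourGroup", "Lightning", "Campfire", "Thunder", "ForestFire"]

def sample_once():
    a = {}
    for node in order:
        key = tuple(a[p] for p in parents[node])
        a[node] = random.random() < p_true[node][key]
    return a

samples = [sample_once() for _ in range(50_000)]

# Estimate, e.g., P(ForestFire | Thunder = True) from the samples.
given = [s for s in samples if s["Thunder"]]
print(sum(s["ForestFire"] for s in given) / len(given))
```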

Gradient Ascent for Bayes Nets

[Figure: a node Y_i with parents U_i = {A, B, ..., N}, and the conditional probability table for Y_i, whose entries are w_ijk = P(y_ij | u_ik).]

Let w_ijk denote one entry in the conditional probability table for variable Y_i in the network:

w_{ijk} = P(Y_i = y_{ij} \,|\, Parents(Y_i) = \text{the list } u_{ik} \text{ of values})

Here U_i = Parents(Y_i) = \{A, B, ..., N\}, the u_{ik} are vectors of parent values, and the y_{ij} are the (scalar) values of Y_i. E.g., if Y_i = Campfire, then u_{ik} might be \langle Storm = T, BusTourGroup = F \rangle.

Perform gradient ascent on \ln P(D|h) by repeatedly:

1. Updating all w_ijk using the training data D (P_h(\cdot) denotes the probability given the current BBN h):

   w_{ijk} \leftarrow w_{ijk} + \eta \sum_{d \in D} \frac{P_h(Y_i = y_{ij}, U_i = u_{ik} \,|\, d)}{w_{ijk}}

2. Renormalizing the w_ijk to ensure \sum_j w_{ijk} = 1 and 0 \le w_{ijk} \le 1.

Derivation of BN Gradient Ascent

\frac{\partial \ln P(D|h)}{\partial w_{ijk}}
  = \frac{\partial}{\partial w_{ijk}} \ln \prod_{d \in D} P_h(d)
  = \sum_{d \in D} \frac{\partial \ln P_h(d)}{\partial w_{ijk}}
  = \sum_{d \in D} \frac{1}{P_h(d)} \frac{\partial P_h(d)}{\partial w_{ijk}}
  = \sum_{d \in D} \frac{1}{P_h(d)} \frac{\partial}{\partial w_{ijk}} \sum_{j', k'} P_h(d \,|\, y_{ij'}, u_{ik'}) P_h(y_{ij'} | u_{ik'}) P_h(u_{ik'})
  = \sum_{d \in D} \frac{1}{P_h(d)} \frac{\partial}{\partial w_{ijk}} \sum_{j', k'} P_h(d \,|\, y_{ij'}, u_{ik'}) w_{ij'k'} P_h(u_{ik'})
  = \sum_{d \in D} \frac{P_h(d \,|\, y_{ij}, u_{ik}) P_h(u_{ik})}{P_h(d)}
  = \sum_{d \in D} \frac{P_h(y_{ij}, u_{ik} \,|\, d) P_h(d) P_h(u_{ik})}{P_h(d) P_h(y_{ij}, u_{ik})}
  = \sum_{d \in D} \frac{P_h(y_{ij}, u_{ik} \,|\, d) P_h(u_{ik})}{P_h(y_{ij} | u_{ik}) P_h(u_{ik})}
  = \sum_{d \in D} \frac{P_h(y_{ij}, u_{ik} \,|\, d)}{P_h(y_{ij} | u_{ik})}
  = \sum_{d \in D} \frac{P_h(y_{ij}, u_{ik} \,|\, d)}{w_{ijk}}
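A minimal sketch of one update-and-renormalize step for a single CPT "column" (a fixed parent setting u_ik), assuming the per-example posteriors P_h(Y_i = y_ij, U_i = u_ik | d) have already been computed by some inference routine; the numbers below are placeholders, not from the notes.

```python
import numpy as np

# One gradient-ascent step on a single CPT column (fixed parent setting u_ik),
# followed by renormalization over j. The per-example posteriors would come
# from an inference routine; these values are placeholders for illustration.
w = np.array([0.3, 0.7])                       # w_ijk for j = 0, 1 at this u_ik
posteriors = np.array([[0.2, 0.5],             # one row per training example d
                       [0.1, 0.6],
                       [0.4, 0.3]])
eta = 0.01

w = w + eta * (posteriors / w).sum(axis=0)     # w_ijk += eta * sum_d P(...|d) / w_ijk
w = np.clip(w, 0.0, None)
w = w / w.sum()                                # renormalize so sum_j w_ijk = 1

print(w)
```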

Expectation Maximization (EM)

When to use:
- The data is only partially observable.
- Unsupervised clustering (the target value is unobservable).
- Supervised learning (some instance attributes are unobservable).

Some uses:
- Training Bayesian belief networks
- Unsupervised clustering (AUTOCLASS)
- Learning hidden Markov models

EM for Estimating k Means

Given:
- Instances from X generated by a mixture of k Gaussian distributions
- Unknown means \mu_1, ..., \mu_k of the k Gaussians
- We don't know which instance x_i was generated by which Gaussian

Determine: maximum likelihood estimates of \mu_1, ..., \mu_k.

Think of the full description of each instance as y_i = \langle x_i, z_{i1}, z_{i2} \rangle, where z_{ij} is 1 if x_i was generated by the jth Gaussian; x_i is observable, z_{ij} is unobservable.

EM Algorithm: pick a random initial h = \langle \mu_1, \mu_2 \rangle, then iterate:

E step: Calculate the expected value E[z_{ij}] of each hidden variable z_{ij}, assuming the current hypothesis h = \langle \mu_1, \mu_2 \rangle holds:

E[z_{ij}] = \frac{p(x = x_i \,|\, \mu = \mu_j)}{\sum_{n=1}^{2} p(x = x_i \,|\, \mu = \mu_n)}
          = \frac{e^{-\frac{1}{2\sigma^2}(x_i - \mu_j)^2}}{\sum_{n=1}^{2} e^{-\frac{1}{2\sigma^2}(x_i - \mu_n)^2}}

M step: Calculate a new maximum likelihood hypothesis h' = \langle \mu_1', \mu_2' \rangle, assuming the value taken on by each hidden variable z_{ij} is its expected value E[z_{ij}] calculated above:

\mu_j' = \frac{\sum_{i=1}^{m} E[z_{ij}] \, x_i}{\sum_{i=1}^{m} E[z_{ij}]}

Then replace h = \langle \mu_1, \mu_2 \rangle by h' = \langle \mu_1', \mu_2' \rangle.

EM Algorithm

EM converges to a local maximum likelihood h and provides estimates of the hidden variables z_{ij}. In fact, it finds a local maximum of E[\ln P(Y|h)], where Y is the complete (observable plus unobservable) data and the expected value is taken over the possible values of the unobserved variables in Y.
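A compact sketch of the two-mean EM loop above, on synthetic one-dimensional data with a known, shared variance; the data, variance, and initial means are illustrative assumptions.

```python
import numpy as np

# EM for the means of a mixture of two 1-D Gaussians with known, shared
# variance sigma^2, following the E and M steps above. Synthetic data.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])
sigma2 = 1.0
mu = np.array([0.0, 1.0])                      # initial guess for the means

for _ in range(50):
    # E step: E[z_ij] proportional to exp(-(x_i - mu_j)^2 / (2 sigma^2))
    resp = np.exp(-(x[:, None] - mu[None, :]) ** 2 / (2 * sigma2))
    resp /= resp.sum(axis=1, keepdims=True)
    # M step: mu_j = sum_i E[z_ij] x_i / sum_i E[z_ij]
    mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)

print(np.round(mu, 2))   # approaches the true means (-2, 3), up to label order
```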

General EM Problem

Given:
- Observed data X = \{x_1, ..., x_m\}
- Unobserved data Z = \{z_1, ..., z_m\}
- A parameterized probability distribution P(Y|h), where Y = \{y_1, ..., y_m\} is the full data, y_i = x_i \cup z_i, and h are the parameters

Determine: the h that (locally) maximizes E[\ln P(Y|h)].

General EM Method

Define the likelihood function Q(h'|h), which calculates Y = X \cup Z using the observed X and the current parameters h to estimate Z:

Q(h'|h) = E[\ln P(Y|h') \,|\, h, X]

EM Algorithm:

- Estimation (E) step: Calculate Q(h'|h) using the current hypothesis h and the observed data X to estimate the probability distribution over Y.
- Maximization (M) step: Replace hypothesis h by the hypothesis h' that maximizes this Q function:

  h \leftarrow \arg\max_{h'} Q(h'|h)

Derivation of k-Means

The hypothesis h is parameterized by \theta = \langle \mu_1, ..., \mu_k \rangle. The observed data are X = \{\langle x_i \rangle\}, and the hidden variables are Z = \{\langle z_{i1}, ..., z_{ik} \rangle\}: z_{ij} = 1 if input x_i was generated by the j-th normal distribution, so there are k entries per input. First, start by defining \ln P(Y|h').

Deriving ln P(Y|h')

p(y_i|h') = p(x_i, z_{i1}, z_{i2}, ..., z_{ik}|h') = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2} \sum_{j=1}^{k} z_{ij} (x_i - \mu_j')^2}

Note that the vector \langle z_{i1}, ..., z_{ik} \rangle contains a single 1 and all the rest are 0.

\ln P(Y|h') = \ln \prod_{i=1}^{m} p(y_i|h')
            = \sum_{i=1}^{m} \ln p(y_i|h')
            = \sum_{i=1}^{m} \left( \ln \frac{1}{\sqrt{2\pi\sigma^2}} - \frac{1}{2\sigma^2} \sum_{j=1}^{k} z_{ij}(x_i - \mu_j')^2 \right)

Deriving E[ln P(Y|h')]

Since \ln P(Y|h') is a linear function of the z_{ij}, and since E[f(z)] = f(E[z]) for linear f:

E[\ln P(Y|h')] = E\left[ \sum_{i=1}^{m} \left( \ln \frac{1}{\sqrt{2\pi\sigma^2}} - \frac{1}{2\sigma^2} \sum_{j=1}^{k} z_{ij}(x_i - \mu_j')^2 \right) \right]
               = \sum_{i=1}^{m} \left( \ln \frac{1}{\sqrt{2\pi\sigma^2}} - \frac{1}{2\sigma^2} \sum_{j=1}^{k} E[z_{ij}](x_i - \mu_j')^2 \right)

Thus,

Q(h'|h) = \sum_{i=1}^{m} \left( \ln \frac{1}{\sqrt{2\pi\sigma^2}} - \frac{1}{2\sigma^2} \sum_{j=1}^{k} E[z_{ij}](x_i - \mu_j')^2 \right)

with

E[z_{ij}] = \frac{e^{-\frac{1}{2\sigma^2}(x_i - \mu_j)^2}}{\sum_{n=1}^{k} e^{-\frac{1}{2\sigma^2}(x_i - \mu_n)^2}}

Finding argmax Q(h'|h)

We want to find the h' = \langle \mu_1', ..., \mu_k' \rangle such that

\arg\max_{h'} Q(h'|h) = \arg\max_{h'} \sum_{i=1}^{m} \left( \ln \frac{1}{\sqrt{2\pi\sigma^2}} - \frac{1}{2\sigma^2} \sum_{j=1}^{k} E[z_{ij}](x_i - \mu_j')^2 \right)
                      = \arg\min_{h'} \sum_{i=1}^{m} \sum_{j=1}^{k} E[z_{ij}](x_i - \mu_j')^2

which is minimized by

\mu_j = \frac{\sum_{i=1}^{m} E[z_{ij}] \, x_i}{\sum_{i=1}^{m} E[z_{ij}]}

Deriving the Update Rule

Set the derivative of the quantity to be minimized to zero:

\frac{\partial}{\partial \mu_j} \sum_{i=1}^{m} \sum_{j'=1}^{k} E[z_{ij'}](x_i - \mu_{j'})^2
  = \frac{\partial}{\partial \mu_j} \sum_{i=1}^{m} E[z_{ij}](x_i - \mu_j)^2
  = -2 \sum_{i=1}^{m} E[z_{ij}](x_i - \mu_j) = 0

\sum_{i=1}^{m} E[z_{ij}] x_i - \sum_{i=1}^{m} E[z_{ij}] \mu_j = 0
\quad\Rightarrow\quad \mu_j = \frac{\sum_{i=1}^{m} E[z_{ij}] \, x_i}{\sum_{i=1}^{m} E[z_{ij}]}

See Bishop (1995), Neural Networks for Pattern Recognition, Oxford University Press.
