Bayesian Learning. Bayes Theorem. MAP, ML hypotheses. MAP learners. Minimum description length principle. Bayes optimal classifier. Naive Bayes learner


1 Bayesian Learning [Read Ch. 6] [Suggested exercises: 6.1, 6.2, 6.6]
- Bayes Theorem
- MAP, ML hypotheses
- MAP learners
- Minimum description length principle
- Bayes optimal classifier
- Naive Bayes learner
- Example: Learning over text data
- Bayesian belief networks
- Expectation Maximization algorithm
(Lecture slides for the textbook Machine Learning, T. Mitchell, McGraw Hill, 1997)

2 Two Roles for Bayesian Methods
Provides practical learning algorithms:
- Naive Bayes learning
- Bayesian belief network learning
- Combine prior knowledge (prior probabilities) with observed data
- Requires prior probabilities
Provides useful conceptual framework:
- Provides "gold standard" for evaluating other learning algorithms
- Additional insight into Occam's razor

3 Bayes Theorem
P(h|D) = P(D|h) P(h) / P(D)
- P(h) = prior probability of hypothesis h
- P(D) = prior probability of training data D
- P(h|D) = probability of h given D
- P(D|h) = probability of D given h

4 Choosing Hypotheses
P(h|D) = P(D|h) P(h) / P(D)
Generally we want the most probable hypothesis given the training data: the maximum a posteriori hypothesis h_MAP.
h_MAP = argmax_{h∈H} P(h|D)
      = argmax_{h∈H} P(D|h) P(h) / P(D)
      = argmax_{h∈H} P(D|h) P(h)
If we assume P(h_i) = P(h_j) for all i, j, then we can further simplify and choose the maximum likelihood (ML) hypothesis:
h_ML = argmax_{h_i∈H} P(D|h_i)

5 Bayes Theorem
Does the patient have cancer or not? A patient takes a lab test and the result comes back positive. The test returns a correct positive result in only 98% of the cases in which the disease is actually present, and a correct negative result in only 97% of the cases in which the disease is not present. Furthermore, .008 of the entire population have this cancer.
P(cancer) =            P(¬cancer) =
P(+|cancer) =          P(−|cancer) =
P(+|¬cancer) =         P(−|¬cancer) =
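A minimal sketch (not part of the original slide) that fills in these quantities from the numbers stated above and compares the two unnormalized posteriors; the variable names are my own.

```python
# Quantities stated in the slide
p_cancer = 0.008
p_not_cancer = 1 - p_cancer          # 0.992
p_pos_given_cancer = 0.98            # correct positive rate
p_neg_given_not = 0.97               # correct negative rate
p_pos_given_not = 1 - p_neg_given_not  # 0.03

# Unnormalized posteriors P(+|h) P(h) for the two hypotheses
score_cancer = p_pos_given_cancer * p_cancer     # ~ 0.0078
score_not = p_pos_given_not * p_not_cancer       # ~ 0.0298

# MAP hypothesis given a positive test result
h_map = "cancer" if score_cancer > score_not else "not cancer"
print(h_map)                                     # not cancer
print(score_cancer / (score_cancer + score_not)) # P(cancer|+) ~ 0.21
```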

6 Basic Formulas for Probabilities
- Product rule: probability P(A ∧ B) of a conjunction of two events A and B:
  P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)
- Sum rule: probability of a disjunction of two events A and B:
  P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
- Theorem of total probability: if events A_1, ..., A_n are mutually exclusive with Σ_{i=1}^{n} P(A_i) = 1, then
  P(B) = Σ_{i=1}^{n} P(B|A_i) P(A_i)

7 Brute Force MAP Hypothesis Learner
1. For each hypothesis h in H, calculate the posterior probability
   P(h|D) = P(D|h) P(h) / P(D)
2. Output the hypothesis h_MAP with the highest posterior probability
   h_MAP = argmax_{h∈H} P(h|D)
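A minimal sketch of this learner, assuming the hypothesis space is small enough to enumerate; the prior and likelihood are passed in as functions, and the names used here are my own rather than anything fixed by the slide.

```python
def brute_force_map(hypotheses, prior, likelihood, data):
    """Return the MAP hypothesis by enumerating H.

    prior(h)            -> P(h)
    likelihood(data, h) -> P(D|h)
    """
    # P(D) is the same for every h, so comparing P(D|h) P(h) is enough.
    scores = {h: likelihood(data, h) * prior(h) for h in hypotheses}
    return max(scores, key=scores.get)
```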

8 Relation to Concept Learning
Consider our usual concept learning task:
- instance space X, hypothesis space H, training examples D
- consider the Find-S learning algorithm (outputs the most specific hypothesis from the version space VS_{H,D})
What would Bayes rule produce as the MAP hypothesis? Does Find-S output a MAP hypothesis?

9 Relation to Concept Learning
Assume a fixed set of instances ⟨x_1, ..., x_m⟩. Assume D is the set of classifications D = ⟨c(x_1), ..., c(x_m)⟩.
Choose P(D|h):

10 Relation to Concept Learning
Assume a fixed set of instances ⟨x_1, ..., x_m⟩. Assume D is the set of classifications D = ⟨c(x_1), ..., c(x_m)⟩.
Choose P(D|h):
- P(D|h) = 1 if h is consistent with D
- P(D|h) = 0 otherwise
Choose P(h) to be the uniform distribution: P(h) = 1/|H| for all h in H.
Then
- P(h|D) = 1/|VS_{H,D}| if h is consistent with D
- P(h|D) = 0 otherwise

11 Evolution of Posterior Probabilities
[Figure: three plots over the hypothesis space showing (a) the prior P(h), (b) the posterior P(h|D1), and (c) the posterior P(h|D1, D2).]

12 Characterizing Learning Algorithms by Equivalent MAP Learners
[Figure: an inductive system (training examples D and hypothesis space H fed into the Candidate Elimination Algorithm, which outputs hypotheses) shown as equivalent to a Bayesian inference system (D and H fed into a brute force MAP learner with P(h) uniform and P(D|h) = 1 if consistent, = 0 if inconsistent). The prior assumptions are made explicit.]

13 Learning A Real Valued Function
[Figure: noisy training points e around the target function f, with the maximum likelihood hypothesis h_ML plotted against x and y.]
Consider any real-valued target function f. Training examples ⟨x_i, d_i⟩, where d_i is a noisy training value:
- d_i = f(x_i) + e_i
- e_i is a random variable (noise) drawn independently for each x_i according to some Gaussian distribution with mean 0
Then the maximum likelihood hypothesis h_ML is the one that minimizes the sum of squared errors:
h_ML = argmin_{h∈H} Σ_{i=1}^{m} (d_i − h(x_i))²
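A small sketch of this result for a linear hypothesis class: data generated from a made-up target with zero-mean Gaussian noise, fit by minimizing the sum of squared errors (numpy.polyfit is used as the least-squares solver; the target and noise level are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target f(x) = 2x + 1, observed with Gaussian noise e_i
x = np.linspace(0, 1, 50)
d = 2 * x + 1 + rng.normal(0.0, 0.2, size=x.shape)   # d_i = f(x_i) + e_i

# Minimizing the sum of squared errors over linear hypotheses is the
# ML fit under the zero-mean Gaussian noise assumption.
slope, intercept = np.polyfit(x, d, deg=1)
sse = np.sum((d - (slope * x + intercept)) ** 2)
print(slope, intercept, sse)
```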

14 Learning A Real Valued Function
h_ML = argmax_{h∈H} p(D|h)
     = argmax_{h∈H} Π_{i=1}^{m} p(d_i|h)
     = argmax_{h∈H} Π_{i=1}^{m} (1/√(2πσ²)) e^{−(d_i − h(x_i))² / (2σ²)}
Maximize the natural log of this instead:
h_ML = argmax_{h∈H} Σ_{i=1}^{m} [ ln(1/√(2πσ²)) − (d_i − h(x_i))² / (2σ²) ]
     = argmax_{h∈H} Σ_{i=1}^{m} − (d_i − h(x_i))² / (2σ²)
     = argmax_{h∈H} Σ_{i=1}^{m} − (d_i − h(x_i))²
     = argmin_{h∈H} Σ_{i=1}^{m} (d_i − h(x_i))²

15 Learning to Predict Probabilities
Consider predicting survival probability from patient data.
Training examples ⟨x_i, d_i⟩, where d_i is 1 or 0.
Want to train a neural network to output a probability given x_i (not a 0 or 1).
In this case one can show
h_ML = argmax_{h∈H} Σ_{i=1}^{m} [ d_i ln h(x_i) + (1 − d_i) ln(1 − h(x_i)) ]
Weight update rule for a sigmoid unit: w_jk ← w_jk + Δw_jk, where
Δw_jk = η Σ_{i=1}^{m} (d_i − h(x_i)) x_ijk
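A minimal sketch of this weight update for a single sigmoid unit, assuming a batch gradient-ascent loop; the function names, learning rate, and epoch count are my own choices for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sigmoid_unit(X, d, eta=0.1, epochs=1000):
    """Gradient ascent on the cross-entropy likelihood:
    w_jk <- w_jk + eta * sum_i (d_i - h(x_i)) * x_ijk.

    X: (m, n) input matrix; d: array of m labels in {0, 1}.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        h = sigmoid(X @ w)          # outputs interpreted as P(d = 1 | x)
        w += eta * X.T @ (d - h)    # batch update from the slide
    return w
```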

16 Minimum Description Length Principle
Occam's razor: prefer the shortest hypothesis.
MDL: prefer the hypothesis h that minimizes
h_MDL = argmin_{h∈H} L_C1(h) + L_C2(D|h)
where L_C(x) is the description length of x under encoding C.
Example: H = decision trees, D = training data labels
- L_C1(h) is the number of bits to describe tree h
- L_C2(D|h) is the number of bits to describe D given h
- Note L_C2(D|h) = 0 if the examples are classified perfectly by h. Need only describe the exceptions.
Hence h_MDL trades off tree size for training errors.

17 Minimum Description Length Principle
h_MAP = argmax_{h∈H} P(D|h) P(h)
      = argmax_{h∈H} log₂ P(D|h) + log₂ P(h)
      = argmin_{h∈H} − log₂ P(D|h) − log₂ P(h)    (1)
Interesting fact from information theory: the optimal (shortest expected coding length) code for an event with probability p is −log₂ p bits.
So interpret (1):
- −log₂ P(h) is the length of h under the optimal code
- −log₂ P(D|h) is the length of D given h under the optimal code
→ prefer the hypothesis that minimizes length(h) + length(misclassifications)

18 Most Probable Classification of New Instances
So far we've sought the most probable hypothesis given the data D (i.e., h_MAP).
Given a new instance x, what is its most probable classification?
h_MAP(x) is not the most probable classification!
Consider three possible hypotheses:
P(h1|D) = .4, P(h2|D) = .3, P(h3|D) = .3
Given a new instance x,
h1(x) = +, h2(x) = −, h3(x) = −
What is the most probable classification of x?

19 Bayes Optimal Classifier
Bayes optimal classification:
argmax_{v_j∈V} Σ_{h_i∈H} P(v_j|h_i) P(h_i|D)
Example:
P(h1|D) = .4, P(−|h1) = 0, P(+|h1) = 1
P(h2|D) = .3, P(−|h2) = 1, P(+|h2) = 0
P(h3|D) = .3, P(−|h3) = 1, P(+|h3) = 0
therefore
Σ_{h_i∈H} P(+|h_i) P(h_i|D) = .4
Σ_{h_i∈H} P(−|h_i) P(h_i|D) = .6
and
argmax_{v_j∈V} Σ_{h_i∈H} P(v_j|h_i) P(h_i|D) = −
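A small sketch that reproduces this example; the dictionary layout is my own, the numbers come from the slide.

```python
# Posteriors over hypotheses and each hypothesis's prediction distribution.
posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
p_label_given_h = {
    "h1": {"+": 1.0, "-": 0.0},
    "h2": {"+": 0.0, "-": 1.0},
    "h3": {"+": 0.0, "-": 1.0},
}

def bayes_optimal(labels, posteriors, p_label_given_h):
    # argmax over labels of sum_h P(v|h) P(h|D)
    score = {v: sum(p_label_given_h[h][v] * p for h, p in posteriors.items())
             for v in labels}
    return max(score, key=score.get), score

print(bayes_optimal(["+", "-"], posteriors, p_label_given_h))
# ('-', {'+': 0.4, '-': 0.6})
```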

20 Gibbs Classifier
The Bayes optimal classifier provides the best result, but can be expensive if there are many hypotheses.
Gibbs algorithm:
1. Choose one hypothesis at random, according to P(h|D)
2. Use this to classify the new instance
Surprising fact: assume the target concepts are drawn at random from H according to the priors on H. Then:
E[error_Gibbs] ≤ 2 E[error_BayesOptimal]
Suppose a correct, uniform prior distribution over H. Then
- Pick any hypothesis from VS, with uniform probability
- Its expected error is no worse than twice Bayes optimal
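A minimal sketch of the Gibbs classifier, assuming we can sample from the posterior; random.choices with posterior weights does the sampling, and the interface (hypotheses as callables) is my own framing.

```python
import random

def gibbs_classify(x, hypotheses, posteriors, rng=random):
    """Sample one hypothesis h ~ P(h|D) and use it to classify x.

    hypotheses: list of callables h(x) -> label
    posteriors: matching list of P(h|D) values
    """
    h = rng.choices(hypotheses, weights=posteriors, k=1)[0]
    return h(x)
```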

21 Naive Bayes Classifier
Along with decision trees, neural networks, and nearest neighbor, one of the most practical learning methods.
When to use:
- Moderate or large training set available
- Attributes that describe instances are conditionally independent given the classification
Successful applications:
- Diagnosis
- Classifying text documents

22 Naive Bayes Classifier
Assume target function f: X → V, where each instance x is described by attributes ⟨a_1, a_2, ..., a_n⟩.
The most probable value of f(x) is:
v_MAP = argmax_{v_j∈V} P(v_j | a_1, a_2, ..., a_n)
      = argmax_{v_j∈V} P(a_1, a_2, ..., a_n | v_j) P(v_j) / P(a_1, a_2, ..., a_n)
      = argmax_{v_j∈V} P(a_1, a_2, ..., a_n | v_j) P(v_j)
Naive Bayes assumption:
P(a_1, a_2, ..., a_n | v_j) = Π_i P(a_i | v_j)
which gives the Naive Bayes classifier:
v_NB = argmax_{v_j∈V} P(v_j) Π_i P(a_i | v_j)

23 Naive Bayes Algorithm
Naive_Bayes_Learn(examples)
  For each target value v_j
    P̂(v_j) ← estimate P(v_j)
    For each attribute value a_i of each attribute a
      P̂(a_i|v_j) ← estimate P(a_i|v_j)
Classify_New_Instance(x)
  v_NB = argmax_{v_j∈V} P̂(v_j) Π_{a_i∈x} P̂(a_i|v_j)
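A minimal sketch of these two procedures using plain relative-frequency estimates (no smoothing, which is discussed on a later slide); the data representation, a list of (attribute_dict, target) pairs, is an assumption of mine.

```python
from collections import Counter, defaultdict

def naive_bayes_learn(examples):
    """examples: list of (attribute_dict, target_value) pairs.
    Returns relative-frequency estimates of P(v) and P(attr=value | v)."""
    class_counts = Counter(v for _, v in examples)
    attr_counts = defaultdict(Counter)      # (v, attr) -> Counter over values
    for attrs, v in examples:
        for attr, value in attrs.items():
            attr_counts[(v, attr)][value] += 1

    total = len(examples)
    p_v = {v: c / total for v, c in class_counts.items()}
    p_a_given_v = {key: {val: c / class_counts[key[0]]
                         for val, c in counter.items()}
                   for key, counter in attr_counts.items()}
    return p_v, p_a_given_v

def naive_bayes_classify(x, p_v, p_a_given_v):
    """x: attribute_dict. Returns argmax_v P(v) * prod_i P(a_i|v)."""
    def score(v):
        s = p_v[v]
        for attr, value in x.items():
            s *= p_a_given_v.get((v, attr), {}).get(value, 0.0)
        return s
    return max(p_v, key=score)
```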

24 Naive Bayes: Example
Consider PlayTennis again, and the new instance
⟨Outlook = sunny, Temperature = cool, Humidity = high, Wind = strong⟩
Want to compute:
v_NB = argmax_{v_j∈V} P(v_j) Π_i P(a_i|v_j)
P(y) P(sun|y) P(cool|y) P(high|y) P(strong|y) = .005
P(n) P(sun|n) P(cool|n) P(high|n) P(strong|n) = .021
→ v_NB = n
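A quick check of these two numbers. The fractions below are relative-frequency estimates taken from the standard 14-example PlayTennis table (9 "yes", 5 "no"), which the slide assumes but does not reproduce.

```python
# Estimates from the 14-example PlayTennis table: 9 "yes", 5 "no".
p_yes = 9 / 14
p_no = 5 / 14

score_yes = p_yes * (2/9) * (3/9) * (3/9) * (3/9)  # sunny, cool, high, strong | yes
score_no = p_no * (3/5) * (1/5) * (4/5) * (3/5)    # sunny, cool, high, strong | no

print(round(score_yes, 3), round(score_no, 3))     # 0.005 0.021  ->  v_NB = no
```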

25 Naive Bayes: Subtleties
1. The conditional independence assumption is often violated:
P(a_1, a_2, ..., a_n | v_j) = Π_i P(a_i|v_j)
...but it works surprisingly well anyway. Note that we don't need the estimated posteriors P̂(v_j|x) to be correct; we need only that
argmax_{v_j∈V} P̂(v_j) Π_i P̂(a_i|v_j) = argmax_{v_j∈V} P(v_j) P(a_1, ..., a_n|v_j)
See [Domingos & Pazzani, 1996] for analysis.
Naive Bayes posteriors are often unrealistically close to 1 or 0.

26 Naive Bayes: Subtleties
2. What if none of the training instances with target value v_j have attribute value a_i? Then P̂(a_i|v_j) = 0, and
P̂(v_j) Π_i P̂(a_i|v_j) = 0
The typical solution is a Bayesian estimate for P̂(a_i|v_j):
P̂(a_i|v_j) ← (n_c + m p) / (n + m)
where
- n is the number of training examples for which v = v_j
- n_c is the number of examples for which v = v_j and a = a_i
- p is a prior estimate for P̂(a_i|v_j)
- m is the weight given to the prior (i.e., the number of "virtual" examples)
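A one-line sketch of this m-estimate; the example call uses a uniform prior over three attribute values and a weight of three virtual examples, both illustrative assumptions.

```python
def m_estimate(n_c, n, p, m):
    """Smoothed estimate of P(a_i|v_j) = (n_c + m*p) / (n + m)."""
    return (n_c + m * p) / (n + m)

# Attribute value never seen with this class (n_c = 0), uniform prior
# p = 1/3 over three attribute values, m = 3 "virtual" examples:
print(m_estimate(n_c=0, n=9, p=1/3, m=3))   # 0.0833... instead of 0
```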

27 Learning to Classify Text
Why?
- Learn which news articles are of interest
- Learn to classify web pages by topic
Naive Bayes is among the most effective algorithms.
What attributes shall we use to represent text documents?

28 Learning to Classify Text
Target concept Interesting?: Document → {+, −}
1. Represent each document by a vector of words: one attribute per word position in the document
2. Learning: use training examples to estimate P(+), P(−), P(doc|+), P(doc|−)
Naive Bayes conditional independence assumption:
P(doc|v_j) = Π_{i=1}^{length(doc)} P(a_i = w_k | v_j)
where P(a_i = w_k | v_j) is the probability that the word in position i is w_k, given v_j.
One more assumption: P(a_i = w_k | v_j) = P(a_m = w_k | v_j) for all i, m

29 Learn_naive_Bayes_text(Examples, V)
1. Collect all words and other tokens that occur in Examples:
   Vocabulary ← all distinct words and other tokens in Examples
2. Calculate the required P(v_j) and P(w_k|v_j) probability terms. For each target value v_j in V do:
   - docs_j ← subset of Examples for which the target value is v_j
   - P(v_j) ← |docs_j| / |Examples|
   - Text_j ← a single document created by concatenating all members of docs_j
   - n ← total number of words in Text_j (counting duplicate words multiple times)
   - for each word w_k in Vocabulary:
     - n_k ← number of times word w_k occurs in Text_j
     - P(w_k|v_j) ← (n_k + 1) / (n + |Vocabulary|)

30 Classify_naive_Bayes_text(Doc)
- positions ← all word positions in Doc that contain tokens found in Vocabulary
- Return v_NB, where
  v_NB = argmax_{v_j∈V} P(v_j) Π_{i∈positions} P(a_i|v_j)
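A compact sketch of the two procedures above. Documents are assumed to be lists of tokens, and log probabilities are used in the classifier to avoid underflow, which is my own implementation choice rather than part of the slides.

```python
import math
from collections import Counter

def learn_naive_bayes_text(examples):
    """examples: list of (list_of_tokens, target_value) pairs.
    Returns (Vocabulary, P(v), P(w|v)) with the (n_k + 1)/(n + |Vocabulary|) estimate."""
    vocabulary = {w for tokens, _ in examples for w in tokens}
    p_v, p_w_given_v = {}, {}
    for v in {target for _, target in examples}:
        docs_v = [tokens for tokens, target in examples if target == v]
        p_v[v] = len(docs_v) / len(examples)
        text_v = [w for tokens in docs_v for w in tokens]
        counts, n = Counter(text_v), len(text_v)
        p_w_given_v[v] = {w: (counts[w] + 1) / (n + len(vocabulary))
                          for w in vocabulary}
    return vocabulary, p_v, p_w_given_v

def classify_naive_bayes_text(doc, vocabulary, p_v, p_w_given_v):
    positions = [w for w in doc if w in vocabulary]
    def log_score(v):
        return math.log(p_v[v]) + sum(math.log(p_w_given_v[v][w]) for w in positions)
    return max(p_v, key=log_score)
```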

31 Twenty NewsGroups
Given 1000 training documents from each group, learn to classify new documents according to which newsgroup they came from:
comp.graphics, comp.os.ms-windows.misc, comp.sys.ibm.pc.hardware, comp.sys.mac.hardware, comp.windows.x, alt.atheism, soc.religion.christian, talk.religion.misc, talk.politics.mideast, talk.politics.misc, talk.politics.guns, misc.forsale, rec.autos, rec.motorcycles, rec.sport.baseball, rec.sport.hockey, sci.space, sci.crypt, sci.electronics, sci.med
Naive Bayes: 89% classification accuracy

32 Article from rec.sport.hockey
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.e
From: (John Doe)
Subject: Re: This year's biggest and worst (opinio
Date: 5 Apr 93 09:53:39 GMT
I can only comment on the Kings, but the most obvious candidate for pleasant surprise is Alex Zhitnik. He came highly touted as a defensive defenseman, but he's clearly much more than that. Great skater and hard shot (though wish he were more accurate). In fact, he pretty much allowed the Kings to trade away that huge defensive liability Paul Coffey. Kelly Hrudey is only the biggest disappointment if you thought he was any good to begin with. But, at best, he's only a mediocre goaltender. A better choice would be Tomas Sandstrom, though not through any fault of his own, but because some thugs in Toronto decided

33 Learning Curve for 20 Newsgroups
[Figure: accuracy vs. training set size (1/3 withheld for test) for Bayes, TFIDF, and PRTFIDF on the 20 Newsgroups data.]

34 Bayesian Belief Networks
Interesting because:
- The Naive Bayes assumption of conditional independence is too restrictive
- But it's intractable without some such assumptions...
- Bayesian belief networks describe conditional independence among subsets of variables
→ allows combining prior knowledge about (in)dependencies among variables with observed training data
(also called Bayes Nets)

35 Conditional Independence
Definition: X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y given the value of Z; that is, if
(∀ x_i, y_j, z_k) P(X = x_i | Y = y_j, Z = z_k) = P(X = x_i | Z = z_k)
More compactly, we write P(X|Y, Z) = P(X|Z).
Example: Thunder is conditionally independent of Rain, given Lightning:
P(Thunder | Rain, Lightning) = P(Thunder | Lightning)
Naive Bayes uses conditional independence to justify
P(X, Y|Z) = P(X|Y, Z) P(Y|Z) = P(X|Z) P(Y|Z)

36 Bayesian Belief Network
[Figure: a network over Storm, BusTourGroup, Lightning, Campfire, Thunder, and ForestFire, with a conditional probability table for Campfire given Storm (S) and BusTourGroup (B).]
The network represents a set of conditional independence assertions:
- Each node is asserted to be conditionally independent of its nondescendants, given its immediate predecessors.
- Directed acyclic graph

37 Bayesian Belief Network
[Figure: the same Storm/BusTourGroup/Lightning/Campfire/Thunder/ForestFire network.]
Represents the joint probability distribution over all variables, e.g., P(Storm, BusTourGroup, ..., ForestFire).
In general,
P(y_1, ..., y_n) = Π_{i=1}^{n} P(y_i | Parents(Y_i))
where Parents(Y_i) denotes the immediate predecessors of Y_i in the graph.
So the joint distribution is fully defined by the graph, plus the P(y_i | Parents(Y_i)).
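A small sketch of this factorization for boolean variables. The two-node structure and the CPT numbers below are made up for illustration; they are not the table from the slide, which did not survive extraction.

```python
# Hypothetical network Storm -> Campfire with made-up CPTs.
parents = {"Storm": [], "Campfire": ["Storm"]}
cpt = {
    "Storm": {(): 0.2},                          # P(Storm = True)
    "Campfire": {(True,): 0.1, (False,): 0.6},   # P(Campfire = True | Storm)
}

def joint(assignment):
    """P(y_1, ..., y_n) = prod_i P(y_i | Parents(Y_i)) for boolean variables."""
    p = 1.0
    for var, value in assignment.items():
        parent_vals = tuple(assignment[par] for par in parents[var])
        p_true = cpt[var][parent_vals]
        p *= p_true if value else (1.0 - p_true)
    return p

print(joint({"Storm": True, "Campfire": False}))   # 0.2 * 0.9 = 0.18
```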

38 Inference in Bayesian Networks
How can one infer the (probabilities of) values of one or more network variables, given observed values of others?
- The Bayes net contains all the information needed for this inference
- If only one variable has an unknown value, it is easy to infer it
- In the general case, the problem is NP-hard
In practice, inference can succeed in many cases:
- Exact inference methods work well for some network structures
- Monte Carlo methods "simulate" the network randomly to calculate approximate solutions

39 Learning of Bayesian Networks
Several variants of this learning task:
- The network structure might be known or unknown
- Training examples might provide values of all network variables, or just some
If the structure is known and we observe all variables, then it's as easy as training a Naive Bayes classifier.

40 Learning Bayes Nets
Suppose the structure is known and the variables are partially observable.
e.g., observe ForestFire, Storm, BusTourGroup, Thunder, but not Lightning, Campfire...
- Similar to training a neural network with hidden units
- In fact, can learn the network conditional probability tables using gradient ascent!
- Converge to the network h that (locally) maximizes P(D|h)

41 Gradient Ascent for Bayes Nets
Let w_ijk denote one entry in the conditional probability table for variable Y_i in the network:
w_ijk = P(Y_i = y_ij | Parents(Y_i) = the list u_ik of values)
e.g., if Y_i = Campfire, then u_ik might be ⟨Storm = T, BusTourGroup = F⟩.
Perform gradient ascent by repeatedly
1. updating all w_ijk using the training data D:
   w_ijk ← w_ijk + η Σ_{d∈D} P_h(y_ij, u_ik | d) / w_ijk
2. then renormalizing the w_ijk to assure Σ_j w_ijk = 1 and 0 ≤ w_ijk ≤ 1

42 More on Learning Bayes Nets
The EM algorithm can also be used. Repeatedly:
1. Calculate the probabilities of unobserved variables, assuming h
2. Calculate new w_ijk to maximize E[ln P(D|h)], where D now includes both observed and (calculated probabilities of) unobserved variables
When the structure is unknown...
- Algorithms use greedy search to add/subtract edges and nodes
- Active research topic

43 Summary: Bayesian Belief Networks
- Combine prior knowledge with observed data
- Impact of prior knowledge (when correct!) is to lower the sample complexity
- Active research area:
  - Extend from boolean to real-valued variables
  - Parameterized distributions instead of tables
  - Extend to first-order instead of propositional systems
  - More effective inference methods
  - ...

44 Expectation Maximization (EM)
When to use:
- Data is only partially observable
- Unsupervised clustering (target value unobservable)
- Supervised learning (some instance attributes unobservable)
Some uses:
- Train Bayesian belief networks
- Unsupervised clustering (AUTOCLASS)
- Learning hidden Markov models

45 Generating Data from a Mixture of k Gaussians
[Figure: a density p(x) formed by overlapping Gaussians along x.]
Each instance x is generated by
1. Choosing one of the k Gaussians with uniform probability
2. Generating an instance at random according to that Gaussian

46 EM for Estimating k Means
Given:
- Instances from X generated by a mixture of k Gaussian distributions
- Unknown means ⟨μ_1, ..., μ_k⟩ of the k Gaussians
- Don't know which instance x_i was generated by which Gaussian
Determine:
- Maximum likelihood estimates of ⟨μ_1, ..., μ_k⟩
Think of the full description of each instance as y_i = ⟨x_i, z_i1, z_i2⟩, where z_ij is 1 if x_i was generated by the jth Gaussian; x_i is observable, z_ij is unobservable.

47 EM for Estimating k Means
EM algorithm: pick random initial h = ⟨μ_1, μ_2⟩, then iterate:
E step: Calculate the expected value E[z_ij] of each hidden variable z_ij, assuming the current hypothesis h = ⟨μ_1, μ_2⟩ holds:
E[z_ij] = p(x = x_i | μ = μ_j) / Σ_{n=1}^{2} p(x = x_i | μ = μ_n)
        = e^{−(x_i − μ_j)² / (2σ²)} / Σ_{n=1}^{2} e^{−(x_i − μ_n)² / (2σ²)}
M step: Calculate a new maximum likelihood hypothesis h' = ⟨μ'_1, μ'_2⟩, assuming the value taken on by each hidden variable z_ij is its expected value E[z_ij] calculated above:
μ_j ← Σ_{i=1}^{m} E[z_ij] x_i / Σ_{i=1}^{m} E[z_ij]
Replace h = ⟨μ_1, μ_2⟩ by h' = ⟨μ'_1, μ'_2⟩.
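A minimal sketch of these E and M steps for two one-dimensional Gaussians with known, equal variance and uniform mixing weights, as on the slide; the data in the example call is generated from made-up means (-2 and 3) purely for illustration.

```python
import numpy as np

def em_two_means(x, sigma=1.0, iters=50, seed=0):
    """EM for the means of a mixture of two Gaussians with known variance
    sigma**2 and uniform mixing weights."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=2, replace=False).astype(float)  # random initial means
    for _ in range(iters):
        # E step: E[z_ij] proportional to exp(-(x_i - mu_j)^2 / (2 sigma^2))
        resp = np.exp(-((x[:, None] - mu[None, :]) ** 2) / (2 * sigma ** 2))
        resp /= resp.sum(axis=1, keepdims=True)
        # M step: mu_j <- sum_i E[z_ij] x_i / sum_i E[z_ij]
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    return mu

# Data drawn from two Gaussians centred at -2 and 3 (illustrative values):
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
print(em_two_means(x))   # approximately [-2, 3] (order may vary)
```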

48 EM Algorithm
Converges to a local maximum likelihood h and provides estimates of the hidden variables z_ij.
In fact, it finds a local maximum of E[ln P(Y|h)]:
- Y is the complete data (observable plus unobservable variables)
- The expected value is taken over the possible values of the unobserved variables in Y

49 General EM Problem
Given:
- Observed data X = {x_1, ..., x_m}
- Unobserved data Z = {z_1, ..., z_m}
- Parameterized probability distribution P(Y|h), where
  - Y = {y_1, ..., y_m} is the full data, y_i = x_i ∪ z_i
  - h are the parameters
Determine:
- h that (locally) maximizes E[ln P(Y|h)]
Many uses:
- Train Bayesian belief networks
- Unsupervised clustering (e.g., k means)
- Hidden Markov models

50 General EM Method
Define the likelihood function Q(h'|h), which calculates Y = X ∪ Z using the observed X and the current parameters h to estimate Z:
Q(h'|h) ← E[ln P(Y|h') | h, X]
EM algorithm:
- Estimation (E) step: Calculate Q(h'|h) using the current hypothesis h and the observed data X to estimate the probability distribution over Y:
  Q(h'|h) ← E[ln P(Y|h') | h, X]
- Maximization (M) step: Replace hypothesis h by the hypothesis h' that maximizes this Q function:
  h ← argmax_{h'} Q(h'|h)
