A [somewhat] Quick Overview of Probability. Shannon Quinn CSCI 6900


1 A [somewhat] Quick Overview of Probability Shannon Quinn CSCI 6900

2 [Some material pilfered from Probabilistic and Bayesian Analytics] Note to other teachers and users of these slides: Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew's tutorials: www.cs.cmu.edu/~awm/tutorials. Comments and corrections gratefully received. Andrew W. Moore, School of Computer Science, Carnegie Mellon University, awm@cs.cmu.edu. Copyright Andrew W. Moore

3 Probability - what you need to really, really know Probabilities are cool

4 Probability - what you need to really, really know Probabilities are cool Random variables and events

5 Discrete Random Variables A is a Boolean-valued random variable if A denotes an event, and there is uncertainty as to whether A occurs. Examples: A = The US president in 2023 will be male. A = You wake up tomorrow with a headache. A = You have Ebola. A = The 1,000,000,000,000th digit of π is 7. Define P(A) as the fraction of possible worlds in which A is true. (We're assuming all possible worlds are equally probable.)

6 Discrete Random Variables A is a Boolean-valued random variable if A denotes an event, a possible outcome of an experiment, and there is uncertainty as to whether A occurs: the experiment is not deterministic. Define P(A) as the fraction of experiments in which A is true. (We're assuming all possible outcomes are equiprobable.) Examples: You roll two 6-sided dice (the experiment) and get doubles (A=doubles, the outcome). I pick two students in the class (the experiment) and they have the same birthday (A=same birthday, the outcome).
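
A quick way to make "fraction of experiments" concrete is to simulate. A minimal sketch in Python (not from the original slides), estimating P(doubles) for two fair d6:

```python
import random

def estimate_p(event, trials=100_000):
    """Estimate P(A) as the fraction of simulated experiments where A occurs."""
    return sum(event() for _ in range(trials)) / trials

# A = "roll two 6-sided dice and get doubles"
doubles = lambda: random.randint(1, 6) == random.randint(1, 6)
print(estimate_p(doubles))  # close to 1/6 = 0.1667
```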

7 Visualizing A (picture: the event space of all possible worlds, total area 1, split into worlds in which A is true and worlds in which A is false; P(A) = area of the reddish oval where A is true)

8 Probability - what you need to really, really know Probabilities are cool Random variables and events There is One True Way to talk about uncertainty: the Axioms of Probability

9 The Axioms of Probability 0 <= P(A) <= 1; P(True) = 1; P(False) = 0; P(A or B) = P(A) + P(B) - P(A and B). (Picture: dice experiments mapped to events, random variables, and probabilities.)

10 (This is Andrew's joke)

11 Interpreting the axioms 0 <= P(A) <= 1; P(True) = 1; P(False) = 0; P(A or B) = P(A) + P(B) - P(A and B). The area of A can't get any smaller than 0, and a zero area would mean no world could ever have A true.

12 Interpreting the axioms 0 <= P(A) <= 1; P(True) = 1; P(False) = 0; P(A or B) = P(A) + P(B) - P(A and B). The area of A can't get any bigger than 1, and an area of 1 would mean all worlds will have A true.

13 Interpreting the axioms 0 <= P(A) <= 1; P(True) = 1; P(False) = 0; P(A or B) = P(A) + P(B) - P(A and B). (Picture: two overlapping events A and B.)

14 Interpreting the axioms 0 <= P(A) <= 1; P(True) = 1; P(False) = 0; P(A or B) = P(A) + P(B) - P(A and B). (Picture: P(A or B) is the combined area of A and B; P(A and B) is the overlap, which P(A) + P(B) counts twice.) Simple addition and subtraction.

15 Theorems from the Axioms 0 <= P(A) <= 1, P(True) = 1, P(False) = 0, P(A or B) = P(A) + P(B) - P(A and B) => P(not A) = P(~A) = 1 - P(A). Proof: P(A or ~A) = 1 and P(A and ~A) = 0; P(A or ~A) = P(A) + P(~A) - P(A and ~A), so 1 = P(A) + P(~A) - 0.

16 Elementary Probability in Pictures P(~A) + P(A) = 1 (picture: the event space split into A and ~A)

17 Another important theorem 0 <= P(A) <= 1, P(True) = 1, P(False) = 0, P(A or B) = P(A) + P(B) - P(A and B) => P(A) = P(A ^ B) + P(A ^ ~B). Proof: A = A and (B or ~B) = (A and B) or (A and ~B), so P(A) = P(A and B) + P(A and ~B) - P((A and B) and (A and ~B)) = P(A and B) + P(A and ~B) - P(A and A and B and ~B) = P(A and B) + P(A and ~B) - 0.

18 Elementary Probability in Pictures P(A) = P(A ^ B) + P(A ^ ~B) (picture: A split into A ^ B and A ^ ~B by the boundary between B and ~B)

19 Probability - what you need to really, really know Probabilities are cool Random variables and events The Axioms of Probability Independence

20 Independent Events Definition: two events A and B are independent if Pr(A and B) = Pr(A)*Pr(B). Intuition: whether A occurs has no effect on whether B occurs (and vice versa). We need to assume the different rolls are independent to solve the next problem. You frequently need to assume the independence of something to solve any learning problem.

21 Some practical problems You're the DM in a D&D game. Joe brings his own d20 and throws 4 critical hits in a row to start off. (DM = dungeon master; d20 = 20-sided die; critical hit = 19 or 20.) What are the odds of that happening with a fair die? Ci = critical hit on trial i, i=1,2,3,4, with P(Ci) = 2/20 = 1/10. P(C1 and C2 and C3 and C4) = P(C1)*P(C2)*P(C3)*P(C4) = (1/10)^4
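
As a sanity check on the arithmetic, a two-line sketch assuming a fair d20 and independent rolls, exactly as the slide does:

```python
# Fair d20, independent rolls: a critical hit (19 or 20) has P = 2/20 = 1/10,
# so four in a row has probability (1/10)^4.
p_crit = 2 / 20
print(p_crit ** 4)  # 0.0001
```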

22 Multivalued Discrete Random Variables Suppose A can take on more than 2 values. A is a random variable with arity k if it can take on exactly one value out of {v1, v2, ..., vk}. Example: V = {aaliyah, aardvark, ..., zymurge, zynga}. Example: V = {aaliyah_aardvark, ..., zynga_zymgurgy}. Thus P(A=vi and A=vj) = 0 if i ≠ j, and P(A=v1 or A=v2 or ... or A=vk) = 1.

23 Terms: Binomials and Multinomials Suppose A can take on more than 2 values. A is a random variable with arity k if it can take on exactly one value out of {v1, v2, ..., vk}. Example: V = {aaliyah, aardvark, ..., zymurge, zynga}. Example: V = {aaliyah_aardvark, ..., zynga_zymgurgy}. The distribution Pr(A) is a multinomial. For k=2 the distribution is a binomial.

24 More about Multivalued Random Variables Using the axioms of probability, and assuming that A obeys P(A=vi and A=vj) = 0 if i ≠ j and P(A=v1 or A=v2 or ... or A=vk) = 1, it's easy to prove that P(A=v1 or A=v2 or ... or A=vi) = Σ_{j=1..i} P(A=vj), and thus Σ_{j=1..k} P(A=vj) = 1.

25 Elementary Probability in Pictures Σ_{j=1..k} P(A=vj) = 1 (picture: the event space partitioned into regions A=1, A=2, A=3, A=4, A=5)

26 Elementary Probability in Pictures Σ_{j=1..k} P(A=vj) = 1 (picture: the event space partitioned into regions A=aaliyah, A=aardvark, ..., A=zynga)

27 Probability - what you need to really, really know Probabilities are cool Random variables and events The Axioms of Probability Independence, binomials, multinomials Conditional probabilities

28 A practical problem I have lots of standard d20 dice and lots of loaded dice, all identical-looking. A loaded die gives a 19/20 ("critical hit") half the time. In the game, someone hands me a random die, which is fair (A) or loaded (~A), with P(A) depending on how I mix the dice. Then I roll, and either get a critical hit (B) or not (~B). Can I mix the dice together so that P(B) is anything I want, say P(B) = 0.137? P(B) = P(B and A) + P(B and ~A) = 0.1*λ + 0.5*(1-λ) — a mixture model. Solving: λ = (0.5 - 0.137)/0.4 = 0.9075.
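
A minimal sketch of the mixture computation (Python, not from the slides), solving for the mixing weight λ = P(fair die) and checking the answer:

```python
# Mixture model: P(B) = 0.1*lam + 0.5*(1 - lam), where lam = P(fair die).
# Solving for the target P(B) = 0.137:
target = 0.137
lam = (0.5 - target) / 0.4
print(lam)                              # 0.9075
print(0.1 * lam + 0.5 * (1 - lam))      # sanity check: 0.137
```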

29 Another picture for this problem It's more convenient to say: if you've picked a fair die, then Pr(critical hit | fair die) = 0.1; if you've picked the loaded die, then Pr(critical hit | loaded die) = 0.5. (Picture: the event space split into A = fair die and ~A = loaded, with regions A and B, ~A and B; areas P(B|A), P(B|~A).) Conditional probability: Pr(B|A) = P(B ^ A) / P(A)

30 Definition of Conditional Probability P(A|B) = P(A ^ B) / P(B). Corollary, the Chain Rule: P(A ^ B) = P(A|B) P(B)

31 Some practical problems (marginalizing out A) I have 3 standard d20 dice and 1 loaded die. Experiment: (1) pick a d20 uniformly at random, then (2) roll it. Let A = "d20 picked is fair" and B = "roll 19 or 20 with that die". What is P(B)? P(B) = P(B|A) P(A) + P(B|~A) P(~A) = 0.1*0.75 + 0.5*0.25 = 0.2
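
The same marginalization as a sketch in Python (illustrative only):

```python
# Marginalize out A: P(B) = P(B|A)P(A) + P(B|~A)P(~A).
p_a = 3 / 4                        # 3 of the 4 dice are fair
p_b = 0.1 * p_a + 0.5 * (1 - p_a)  # crit probabilities: fair 0.1, loaded 0.5
print(p_b)                         # 0.2
```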

32 Bayes rule: posterior P(A|B) = P(B|A) * P(A) / P(B), with prior P(A). Equivalently, P(B|A) = P(A|B) * P(B) / P(A). Bayes, Thomas (1763) An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:370-418. "...by no means merely a curious speculation in the doctrine of chances, but necessary to be solved in order to a sure foundation for all our reasonings concerning past facts, and what is likely to be hereafter... necessary to be considered by any that would give a clear account of the strength of analogical or inductive reasoning"

33 Probability - what you need to really, really know Probabilities are cool Random variables and events The Axioms of Probability Independence, binomials, multinomials Conditional probabilities Bayes Rule

34 Some practical problems Joe throws 4 critical hits in a row. Is Joe cheating? A = Joe using cheater's die; C = roll 19 or 20, with P(C|A) = 0.5, P(C|~A) = 0.1; B = C1 and C2 and C3 and C4. Pr(B|A) = 0.5^4 = 0.0625; P(B|~A) = 0.1^4 = 0.0001. P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|~A) P(~A)] = 0.0625 * P(A) / [0.0625 * P(A) + 0.0001 * (1 - P(A))]

35 What's the experiment and outcome here? Outcome A: Joe is cheating. Experiment: Joe picked a die uniformly at random from a bag containing 10,000 fair dice and one bad one. Or: Joe is a D&D player picked uniformly at random from a set of 1,000,000 people, n of whom cheat with probability p > 0. Or: I have no idea, but I don't like his looks; call it P(A) = 0.1.

36 Some practical problems Joe throws 4 critical hits in a row. Is Joe cheating? A = Joe using cheater's die; C = roll 19 or 20, with P(C|A) = 0.5, P(C|~A) = 0.1; B = C1 and C2 and C3 and C4. Pr(B|A) = 0.0625; P(B|~A) = 0.0001. Posterior odds: P(A|B) / P(~A|B) = [P(B|A) P(A) / P(B)] / [P(B|~A) P(~A) / P(B)] = [P(B|A) / P(B|~A)] * [P(A) / P(~A)] = 625 * P(A) / P(~A). Moral: with enough evidence the prior P(A) doesn't really matter.
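
A short sketch (Python, not from the slides) computing the posterior for several priors, to show the evidence swamping the prior:

```python
# Posterior that Joe cheats after B = four crits in a row, for several priors P(A).
def posterior(prior):
    like_a, like_not_a = 0.5 ** 4, 0.1 ** 4        # P(B|A), P(B|~A)
    num = like_a * prior
    return num / (num + like_not_a * (1 - prior))

for prior in (0.5, 0.1, 0.01):
    print(f"P(A) = {prior:<4} ->  P(A|B) = {posterior(prior):.3f}")
# Even a 1% prior gives a posterior near 0.86: the evidence dominates.
```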

37 Probability - what you need to really, really know Probabilities are cool Random variables and events The Axioms of Probability Independence, binomials, multinomials Conditional probabilities Bayes Rule MLE s, smoothing, and MAPs

38 Some practical problems I bought a loaded d20 on eBay but it didn't come with any specs. How can I find out how it behaves? (Histogram: frequency of each face shown.) 1. Collect some data (20 rolls). 2. Estimate Pr(i) = C(rolls of i) / C(any roll).

39 One solution I bought a loaded d20 on eBay but it didn't come with any specs. How can I find out how it behaves? (Histogram of the 20 rolls.) MLE = maximum likelihood estimate: P(1)=0, P(2)=0, P(3)=0, P(4)=0.1, ..., P(19)=0.25, P(20)=0.2. But: do I really think it's impossible to roll a 1, 2 or 3? Would you bet your house on it?

40 A better solution I bought a loaded d20 on eBay but it didn't come with any specs. How can I find out how it behaves? (Histogram of the rolls.) 0. Imagine some data (20 rolls, each face i shows up once). 1. Collect some data (20 rolls). 2. Estimate Pr(i) = C(rolls of i) / C(any roll).

41 A better solution (Histogram of the rolls.) P(1)=1/40, P(2)=1/40, P(3)=1/40, P(4)=(2+1)/40, ..., P(19)=(5+1)/40, P(20)=(4+1)/40=1/8. P̂r(i) = (C(i) + 1) / (C(ANY) + C(IMAGINED)). For P(19): 0.25 vs. 6/40 = 0.15, really different! Maybe I should imagine less data?

42 A better solution? (Same estimates as the previous slide.) P(1)=1/40, P(2)=1/40, P(3)=1/40, P(4)=(2+1)/40, ..., P(19)=(5+1)/40, P(20)=(4+1)/40=1/8. P̂r(i) = (C(i) + 1) / (C(ANY) + C(IMAGINED)). 0.25 vs. 0.15: really different! Maybe I should imagine less data?

43 A better solution? Q: What if I imagined m rolls, with probability q = 1/20 of rolling each face i? Old: P̂r(i) = (C(i) + 1) / (C(ANY) + C(IMAGINED)). New: P̂r(i) = (C(i) + mq) / (C(ANY) + m). I can use this formula with m > 20, or even with m < 20, say m = 1.

44 A better solution Q: What if I used m rolls with a probability of q = 1/20 of rolling any i? P̂r(i) = (C(i) + mq) / (C(ANY) + m). If m >> C(ANY) then your imagined q rules; if m << C(ANY) then your data rules. BUT you never, ever end up with Pr(i) = 0.
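
A minimal sketch of the smoothed estimator (Python; the roll data below is hypothetical, chosen to echo the histogram in the slides):

```python
from collections import Counter

def smoothed(rolls, m=1, k=20):
    """P̂r(i) = (C(i) + m*q) / (C(ANY) + m), with q = 1/k for a d20."""
    c, n, q = Counter(rolls), len(rolls), 1 / k
    return {i: (c[i] + m * q) / (n + m) for i in range(1, k + 1)}

rolls = [4, 4, 19, 19, 19, 19, 19, 20, 20, 20, 20]  # hypothetical data
probs = smoothed(rolls, m=1)
print(probs[1], probs[19])   # every face keeps nonzero probability
assert abs(sum(probs.values()) - 1) < 1e-9
```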

45 Terminology (more later) This is called a uniform Dirichlet prior. C(i), C(ANY) are sufficient statistics. P̂r(i) = (C(i) + mq) / (C(ANY) + m). MLE = maximum likelihood estimate; MAP = maximum a posteriori estimate.

46 Probability - what you need to really, really know Probabilities are cool Random variables and events The Axioms of Probability Independence, binomials, multinomials Conditional probabilities Bayes Rule MLE s, smoothing, and MAPs The joint distribution

47 Some practical problems I have 1 standard d6 die and 2 loaded d6 dice. Loaded high: P(X=6)=0.50. Loaded low: P(X=1)=0.50. Experiment: pick two of the three d6 uniformly at random (A) and roll them. What is more likely: rolling a seven or rolling doubles? Three combinations: HL, HF, FL. P(D) = P(D ^ A=HL) + P(D ^ A=HF) + P(D ^ A=FL) = P(D|A=HL)*P(A=HL) + P(D|A=HF)*P(A=HF) + P(D|A=FL)*P(A=FL)

48 Some practical problems I have 1 standard d6 die and 2 loaded d6 dice. Loaded high: P(X=6)=0.50. Loaded low: P(X=1)=0.50. Experiment: pick two of the three d6 uniformly at random (A) and roll them. What is more likely: rolling a seven or rolling doubles? Three combinations: HL, HF, FL. (6×6 table of Roll 1 × Roll 2: D marks doubles on the diagonal, 7 marks sevens on the anti-diagonal.)

49 A brute-force solution A joint probability table shows P(X1=x1 and ... and Xk=xk) for every possible combination of values x1, x2, ..., xk. With this you can compute any P(A) where A is any boolean combination of the primitive events (Xi=xi), e.g. P(doubles), P(seven or eleven), P(total is higher than 5). (Table rows: A ∈ {FL, HL, HF} × Roll 1 × Roll 2 with probability P and a comment, e.g. A=FL, rolls (1,1): P = 1/3 * 1/6 * 1/2, doubles; A=FL, rolls (1,2): P = 1/3 * 1/6 * 1/10; ...; A=HF, rolls (1,1): doubles; etc.)
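
A brute-force enumeration as a Python sketch. One assumption beyond the slide: each non-favored face of a loaded die has probability 0.1, which matches the 1/10 entries in the table above.

```python
from itertools import product

# Face distributions for the three dice.
fair = {f: 1 / 6 for f in range(1, 7)}
high = {f: 0.5 if f == 6 else 0.1 for f in range(1, 7)}  # loaded high
low  = {f: 0.5 if f == 1 else 0.1 for f in range(1, 7)}  # loaded low

p_seven = p_doubles = 0.0
for d1, d2 in [(high, low), (high, fair), (fair, low)]:  # HL, HF, FL
    for r1, r2 in product(range(1, 7), repeat=2):
        p = (1 / 3) * d1[r1] * d2[r2]                    # P(A) * P(rolls | A)
        p_seven += p * (r1 + r2 == 7)
        p_doubles += p * (r1 == r2)
print(p_seven, p_doubles)   # seven comes out more likely here
```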

50 The Joint Distribution Example: Boolean variables A, B, C Recipe for making a joint distribution of M variables:

51 The Joint Distribution Example: Boolean variables A, B, C. Recipe for making a joint distribution of M variables: 1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). (Table columns: A, B, C.)

52 The Joint Distribution Example: Boolean variables A, B, C. Recipe for making a joint distribution of M variables: 1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). 2. For each combination of values, say how probable it is. (Table columns: A, B, C, Prob.)

53 The Joint Distribution Example: Boolean variables A, B, C. Recipe for making a joint distribution of M variables: 1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). 2. For each combination of values, say how probable it is. 3. If you subscribe to the axioms of probability, those numbers must sum to 1. (Table columns: A, B, C, Prob; picture: Venn diagram with regions such as A ^ C = 0.30, B = 0.10.)
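
The recipe as a Python sketch; the probabilities below are made up for illustration:

```python
from itertools import product

# Recipe: (1) list all 2^M rows, (2) assign each a probability,
# (3) check the numbers sum to 1. Probabilities here are hypothetical.
rows = list(product([False, True], repeat=3))        # truth table for A, B, C
probs = [0.10, 0.05, 0.10, 0.05, 0.20, 0.10, 0.25, 0.15]
joint = dict(zip(rows, probs))
assert abs(sum(joint.values()) - 1.0) < 1e-9         # axiom check
```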

54 Using the Joint Once you have the JD you can ask for the probability of any logical expression involving your attributes: P(E) = Σ_{rows matching E} P(row). Abstract: Predict whether income exceeds $50K/yr based on census data. Also known as the "Census Income" dataset. [Kohavi, 1996] Number of Instances: 48,842. Number of Attributes: 14 (in UCI's copy of the dataset); 3 (here).

55 Using the Joint P(Poor and Male) = Σ_{rows matching Poor and Male} P(row)

56 Using the Joint P(Poor) = Σ_{rows matching Poor} P(row)

57 Probability - what you need to really, really know Probabilities are cool Random variables and events The Axioms of Probability Independence, binomials, multinomials Conditional probabilities Bayes Rule MLE s, smoothing, and MAPs The joint distribution Inference

58 Inference with the Joint P(E1 | E2) = P(E1 ^ E2) / P(E2) = Σ_{rows matching E1 and E2} P(row) / Σ_{rows matching E2} P(row)

59 Inference with the Joint P(E1 | E2) = P(E1 ^ E2) / P(E2) = Σ_{rows matching E1 and E2} P(row) / Σ_{rows matching E2} P(row). Example from the census joint: P(Male | Poor) = 0.612.
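
A generic inference sketch in Python, reusing the toy A, B, C joint built above; an event is any predicate over a row:

```python
# P(E) sums matching rows; P(E1|E2) restricts the sum to rows matching E2.
def p(joint, event):
    return sum(pr for row, pr in joint.items() if event(row))

def p_cond(joint, e1, e2):
    return p(joint, lambda r: e1(r) and e2(r)) / p(joint, e2)

# With the toy joint above (rows are (a, b, c) tuples): P(A | C)
print(p_cond(joint, lambda r: r[0], lambda r: r[2]))
```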

60 Estimating the joint distribution Collect some data points. Estimate the probability P(E1=e1 ^ ... ^ En=en) as #(that row appears) / #(any row appears). (Table: Gender, Hours, Wealth; rows g1 h1 w1; g2 h2 w2; ...; gn hn wn.)

61 Estimating the joint distribution For each combination of values r: Total = C[r] = 0. Complexity? O(2^d), d = #attributes (all binary). For each data row ri: C[ri]++, Total++. Complexity? O(n), n = total size of input data. Estimate P̂(ri) = C[ri] / Total. (Example row ri: female, 40.5+, poor. Table: Gender, Hours, Wealth; rows g1 h1 w1 through gn hn wn.)

62 Estimating the joint distribution For each combination of values r: Total = C[r] = 0. Complexity? O(∏_{i=1..d} k_i), k_i = arity of attribute i. For each data row ri: C[ri]++, Total++. Complexity? O(n), n = total size of input data. (Table: Gender, Hours, Wealth; rows g1 h1 w1 through gn hn wn.)

63 Estimating the joint distribution For each combination of values r: Total = C[r] = 0. For each data row ri: C[ri]++, Total++. Complexity of initialization? O(∏_{i=1..d} k_i), k_i = arity of attribute i. Complexity of counting? O(n), n = total size of input data. (Table: Gender, Hours, Wealth; rows g1 h1 w1 through gn hn wn.)

64 Estimating the joint distribution For each data row ri: if ri is not in the hash tables C, Total, insert C[ri] = 0; then C[ri]++, Total++. Complexity? O(m), m = size of the model (number of distinct rows actually seen). Complexity? O(n), n = total size of input data. (Table: Gender, Hours, Wealth; rows g1 h1 w1 through gn hn wn.)
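
The hash-table version as a Python sketch; the three-record dataset is hypothetical:

```python
from collections import Counter

def estimate_joint(data_rows):
    """One pass over the data; memory grows with the number of DISTINCT
    rows seen (model size m), not with the product of attribute arities."""
    counts, total = Counter(), 0
    for row in data_rows:            # row is a tuple like (gender, hours, wealth)
        counts[row] += 1
        total += 1
    return {row: c / total for row, c in counts.items()}

data = [("F", "40.5+", "poor"), ("M", "40.5+", "rich"),
        ("F", "40.5+", "poor")]      # hypothetical dataset
print(estimate_joint(data))
```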

65 Probability - what you need to really, really know Probabilities are cool Random variables and events The Axioms of Probability Independence, binomials, multinomials Conditional probabilities Bayes Rule MLE s, smoothing, and MAPs The joint distribution Inference Density estimation and classification

66 Density Estimation Our Joint Distribution learner is our first example of something called Density Estimation. A Density Estimator learns a mapping from a set of attribute values to a Probability. (Diagram: Input Attributes → Density Estimator → Probability.) Copyright Andrew W. Moore

67 Density Estimation Compare it against the two other major kinds of models: Classifier: Input Attributes → prediction of categorical output or class (one of a few discrete values). Density Estimator: Input Attributes → Probability. Regressor: Input Attributes → prediction of real-valued output. Copyright Andrew W. Moore

68 Density Estimation → Classification Classifier: Input Attributes x → prediction of categorical output, one of y1, ..., yk. Density Estimator: Input Attributes + Class → P̂(x,y). To classify x: 1. Use your estimator to compute P̂(x,y1), ..., P̂(x,yk). 2. Return the class y* with the highest predicted probability. Ideally this is correct with probability P̂(x,y*) / (P̂(x,y1) + ... + P̂(x,yk)). Binary case: predict POS if P̂(x) > 0.5. Copyright Andrew W. Moore
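
A minimal sketch of that classification rule (Python; p_hat stands for any learned joint-density estimate, such as the table estimators above):

```python
# Classify x by the class with the highest estimated joint probability.
def classify(p_hat, x, classes):
    return max(classes, key=lambda y: p_hat(x, y))

# Renormalized confidence of the prediction over the candidate classes.
def confidence(p_hat, x, classes):
    scores = {y: p_hat(x, y) for y in classes}
    total = sum(scores.values())
    return {y: s / total for y, s in scores.items()}
```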

69 Classification vs Density Estimation (side-by-side pictures contrasting a classifier's decision boundary with a density estimate)

70 Bayes Classifiers If we can do inference over Pr(X1, ..., Xd, Y), in particular compute Pr(X1, ..., Xd | Y) and Pr(Y), then we can use Bayes rule to compute Pr(Y | X1, ..., Xd) = Pr(X1, ..., Xd | Y) Pr(Y) / Pr(X1, ..., Xd)

71 Summary: The Bad News Density estimation by directly learning the joint is trivial, mindless and dangerous (Andrew's joke). Density estimation by directly learning the joint is hopeless unless you have some combination of: very few attributes; attributes with low arity; lots and lots of data. Otherwise you can't estimate all the row frequencies. Copyright Andrew W. Moore

72 Part of a Joint Distribution (Table: variables A, B, C, D, E are five consecutive words, p is the probability of the 5-gram. Example rows: "is the effect of the"; "is the effect of a"; "the effect of this"; "to this effect :"; "be the effect of the"; "not the effect of any"; "does not affect the general"; "does not affect the question"; "any manner affect the principle".)

73 Probability - what you need to really, really know Probabilities are cool Random variables and events The Axioms of Probability Independence, binomials, multinomials Conditional probabilities Bayes Rule MLE s, smoothing, and MAPs The joint distribution Inference Density estimation and classification Naïve Bayes density estimators and classifiers

74 Naïve Density Estimation The problem with the Joint Estimator is that it just mirrors the training data (and is also really, really hard to compute). We need something which generalizes more usefully. The naïve model generalizes strongly: assume that each attribute is distributed independently of any of the other attributes. Copyright Andrew W. Moore

75 Naïve Distribution General Case Suppose X1, X2, ..., Xd are independently distributed. Pr(X1=x1, ..., Xd=xd) = Pr(X1=x1) * ... * Pr(Xd=xd). So if we have a Naïve Distribution we can construct any row of the implied Joint Distribution on demand. How do we learn this? Copyright Andrew W. Moore

76 Learning a Naïve Density Estimator MLE: P(Xi=xi) = (#records with Xi=xi) / (#records). Dirichlet (MAP): P(Xi=xi) = (#records with Xi=xi + mq) / (#records + m). Another trivial learning algorithm! Copyright Andrew W. Moore
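
A sketch of the naïve learner in Python, using the MAP formula with q = 1/arity per attribute (an assumption consistent with the uniform Dirichlet prior above):

```python
from collections import Counter

def learn_naive(rows, arities, m=1):
    """Per-attribute MAP estimates:
    P(X_i = x) = (#records with X_i = x + m*q_i) / (#records + m), q_i = 1/arity_i."""
    n = len(rows)
    model = []
    for i, k in enumerate(arities):
        counts = Counter(r[i] for r in rows)
        # Bind counts and q at definition time; unseen values still get m*q/(n+m).
        model.append(lambda x, c=counts, q=1 / k: (c[x] + m * q) / (n + m))
    return model

def naive_joint(model, row):
    # Independence assumption: P(x1,...,xd) = prod_i P(X_i = x_i)
    p = 1.0
    for p_i, x in zip(model, row):
        p *= p_i(x)
    return p
```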

77 Probability - what you need to really, really know Probabilities are cool Random variables and events The Axioms of Probability Independence, binomials, multinomials Conditional probabilities Bayes Rule MLE s, smoothing, and MAPs The joint distribution Inference Density estimation and classification Naïve Bayes density estimators and classifiers Conditional independence more on this tomorrow!

More information