Expectation maximization


1 Expectation maximization. Subhransu Maji, CMPSCI 689: Machine Learning, 14 April 2015
2 Motivation. Suppose you are building a naive Bayes spam classifier. After you are done, your boss tells you that there is no money to label the data. You have a probabilistic model that assumes labelled data, but you don't have any labels. Can you still do something? Amazingly, you can: treat the labels as hidden variables and try to learn them simultaneously along with the parameters of the model. Expectation Maximization (EM) is a broad family of algorithms for solving hidden variable problems. In today's lecture we will derive EM algorithms for clustering and naive Bayes classification, and learn why EM works.
3 Gaussian mixture model for clustering. Suppose the data comes from a Gaussian Mixture Model (GMM): there are K clusters, and the data from cluster k is drawn from a Gaussian with mean μ_k and variance σ_k². We will assume that the data comes with labels (we will soon remove this assumption). Generative story of the data: for each example n = 1, 2, ..., N, choose a label y_n ~ Mult(θ_1, θ_2, ..., θ_K), then choose the example x_n ~ N(μ_{y_n}, σ²_{y_n}). Likelihood of the data:

$$ p(D) = \prod_{n=1}^{N} p(y_n)\,p(x_n \mid y_n) = \prod_{n=1}^{N} \theta_{y_n}\,\mathcal{N}(x_n; \mu_{y_n}, \sigma_{y_n}^2) = \prod_{n=1}^{N} \theta_{y_n} \left(2\pi\sigma_{y_n}^2\right)^{-D/2} \exp\!\left(-\frac{\lVert x_n - \mu_{y_n}\rVert^2}{2\sigma_{y_n}^2}\right) $$
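The generative story maps directly onto a few lines of code. A minimal sampling sketch, assuming 1-D data, K = 2 clusters, and hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture parameters (K = 2 clusters, 1-D data).
theta = np.array([0.6, 0.4])    # mixing weights theta_k, sum to 1
mu    = np.array([-2.0, 3.0])   # cluster means mu_k
sigma = np.array([1.0, 0.5])    # cluster standard deviations sigma_k

# Generative story: choose y_n ~ Mult(theta), then x_n ~ N(mu_{y_n}, sigma_{y_n}^2).
N = 1000
y = rng.choice(len(theta), size=N, p=theta)
x = rng.normal(mu[y], sigma[y])
```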
4 GMM: known labels. Likelihood of the data (as on the previous slide):

$$ p(D) = \prod_{n=1}^{N} \theta_{y_n} \left(2\pi\sigma_{y_n}^2\right)^{-D/2} \exp\!\left(-\frac{\lVert x_n - \mu_{y_n}\rVert^2}{2\sigma_{y_n}^2}\right) $$

If you knew the labels y_n, then the maximum-likelihood estimates of the parameters are easy:

$$ \theta_k = \frac{1}{N}\sum_n [y_n = k] \quad \text{(fraction of examples with label } k) $$

$$ \mu_k = \frac{\sum_n [y_n = k]\,x_n}{\sum_n [y_n = k]} \quad \text{(mean of all the examples with label } k) $$

$$ \sigma_k^2 = \frac{\sum_n [y_n = k]\,\lVert x_n - \mu_k\rVert^2}{\sum_n [y_n = k]} \quad \text{(variance of all the examples with label } k) $$
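These estimates are one line each in code; a sketch continuing the sampling example above (x, y, and theta as generated there):

```python
# Maximum-likelihood estimates when the labels y are known (1-D case).
K = len(theta)
theta_hat = np.array([np.mean(y == k) for k in range(K)])    # fraction per cluster
mu_hat    = np.array([x[y == k].mean() for k in range(K)])   # per-cluster mean
var_hat   = np.array([((x[y == k] - mu_hat[k]) ** 2).mean()  # per-cluster variance
                      for k in range(K)])
```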
5 GMM: unknown labels. Now suppose you didn't have the labels y_n. Analogous to k-means, one solution is to iterate: start by guessing the parameters and then repeat two steps: estimate labels given the parameters, then estimate parameters given the labels. In k-means we assigned each point to a single cluster, also called a hard assignment (point 10 goes to cluster 2). In expectation maximization (EM) we will use soft assignments (point 10 goes half to cluster 2 and half to cluster 5). Let's define a random variable z_n = [z_{n,1}, z_{n,2}, ..., z_{n,K}] to denote the assignment vector for the n-th point. Hard assignment: only one of the z_{n,k} is 1, the rest are 0. Soft assignment: the z_{n,k} are positive and sum to 1.
6 GMM: parameter estimation. Formally, z_{n,k} is the probability that the n-th point goes to cluster k:

$$ z_{n,k} = p(y_n = k \mid x_n) = \frac{p(y_n = k, x_n)}{p(x_n)} \propto p(y_n = k)\,p(x_n \mid y_n = k) = \theta_k\,\mathcal{N}(x_n; \mu_k, \sigma_k^2) $$

Given a set of parameters (θ_k, μ_k, σ_k²), z_{n,k} is easy to compute. Given the z_{n,k}, we can update the parameters as:

$$ \theta_k = \frac{1}{N}\sum_n z_{n,k} \quad \text{(fraction of examples with label } k) $$

$$ \mu_k = \frac{\sum_n z_{n,k}\,x_n}{\sum_n z_{n,k}} \quad \text{(mean of all the fractional examples with label } k) $$

$$ \sigma_k^2 = \frac{\sum_n z_{n,k}\,\lVert x_n - \mu_k\rVert^2}{\sum_n z_{n,k}} \quad \text{(variance of all the fractional examples with label } k) $$
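Putting the two steps together gives EM for the GMM. A sketch continuing the 1-D example above, with arbitrary (hypothetical) starting values:

```python
from scipy.stats import norm

# Initialize parameters (arbitrary starting values).
theta_em = np.full(K, 1.0 / K)
mu_em    = rng.choice(x, size=K)
var_em   = np.full(K, x.var())

for _ in range(50):
    # E step: responsibilities z[n, k] = p(y_n = k | x_n), normalized over k.
    z = theta_em * norm.pdf(x[:, None], loc=mu_em, scale=np.sqrt(var_em))
    z /= z.sum(axis=1, keepdims=True)
    # M step: weighted versions of the known-label estimates.
    Nk = z.sum(axis=0)
    theta_em = Nk / len(x)
    mu_em    = (z * x[:, None]).sum(axis=0) / Nk
    var_em   = (z * (x[:, None] - mu_em) ** 2).sum(axis=0) / Nk
```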
7 GMM: example. We have replaced the indicator variable [y_n = k] with p(y_n = k | x_n), which is the expectation of [y_n = k]. This is our guess of the labels. Just like k-means, EM is susceptible to local minima. [Clustering example figure: k-means vs. GMM.]
8 The EM framework. We have data with observations x_n and hidden variables y_n, and would like to estimate parameters θ. The likelihood of the data and hidden variables:

$$ p(D) = \prod_n p(x_n, y_n \mid \theta) $$

Only the x_n are known, so we can compute the data likelihood by marginalizing out the y_n:

$$ p(D) = \prod_n \sum_{y_n} p(x_n, y_n \mid \theta) $$

Parameter estimation by maximizing the log-likelihood:

$$ \theta_{ML} = \arg\max_\theta \sum_n \log \sum_{y_n} p(x_n, y_n \mid \theta) $$

This is hard to maximize since the sum is inside the log.
9 Jensen's inequality. Given a concave function f and a set of weights λ_i ≥ 0 with Σ_i λ_i = 1, Jensen's inequality states that

$$ f\!\left(\sum_i \lambda_i x_i\right) \ge \sum_i \lambda_i f(x_i) $$

This is a direct consequence of concavity: f(ax + by) ≥ a f(x) + b f(y) when a ≥ 0, b ≥ 0, a + b = 1. [Figure: a concave curve with the chord from f(x) to f(y) lying below it, so f(ax + by) ≥ a f(x) + b f(y).]
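As a quick numeric sanity check (values chosen purely for illustration), take the concave function f = log with λ = (1/2, 1/2) and x = (1, 4):

$$ \log\!\left(\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 4\right) = \log 2.5 \approx 0.916 \;\ge\; \tfrac{1}{2}\log 1 + \tfrac{1}{2}\log 4 = \log 2 \approx 0.693 $$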
10 The EM framework. Construct a lower bound on the log-likelihood using Jensen's inequality, where a distribution q(y_n) plays the role of the weights λ:

$$ L(\theta) = \sum_n \log \sum_{y_n} p(x_n, y_n \mid \theta) = \sum_n \log \sum_{y_n} q(y_n)\,\frac{p(x_n, y_n \mid \theta)}{q(y_n)} $$

$$ \ge \sum_n \sum_{y_n} q(y_n) \log \frac{p(x_n, y_n \mid \theta)}{q(y_n)} \quad \text{(Jensen's inequality)} $$

$$ = \sum_n \sum_{y_n} \left[ q(y_n)\log p(x_n, y_n \mid \theta) - q(y_n)\log q(y_n) \right] =: \hat{L}(\theta) $$

Maximize the lower bound (the second term is independent of θ):

$$ \arg\max_\theta \sum_n \sum_{y_n} q(y_n)\log p(x_n, y_n \mid \theta) $$
11 Lower bound illustrated. Maximizing the lower bound increases the value of the original function if the lower bound touches the function at the current value. [Figure: L(θ) with lower bounds L̂(θ) touching it at θ_t and θ_{t+1}, showing the increase from θ_t to θ_{t+1}.]
12 An optimal lower bound. Any choice of the probability distribution q(y_n) is valid as long as the lower bound touches the function at the current estimate of θ. We can then pick the optimal q(y_n) by maximizing the lower bound:

$$ \arg\max_q \sum_{y_n} \left[ q(y_n)\log p(x_n, y_n \mid \theta) - q(y_n)\log q(y_n) \right] $$

This gives us q(y_n) = p(y_n | x_n, θ_t), for which L(θ_t) = L̂(θ_t). (Proof: use Lagrange multipliers with the sum-to-one constraint; a sketch follows below.) This is the distribution of the hidden variables conditioned on the data and the current estimate of the parameters, which is exactly what we computed in the GMM example.
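A sketch of the Lagrange-multiplier argument, written for a single data point x with the subscript n dropped:

$$ \Lambda(q, \lambda) = \sum_y \left[ q(y)\log p(x, y \mid \theta) - q(y)\log q(y) \right] + \lambda\left(\sum_y q(y) - 1\right) $$

$$ \frac{\partial \Lambda}{\partial q(y)} = \log p(x, y \mid \theta) - \log q(y) - 1 + \lambda = 0 \;\Rightarrow\; q(y) \propto p(x, y \mid \theta) $$

Normalizing gives q(y) = p(x, y | θ) / Σ_{y'} p(x, y' | θ) = p(y | x, θ).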
13 The EM algorithm. We have data with observations x_n and hidden variables y_n, and would like to estimate parameters θ of the distribution p(x | θ). EM algorithm: initialize the parameters θ randomly, then iterate between the following two steps.

E step: compute the probability distribution over the hidden variables:

$$ q(y_n) = p(y_n \mid x_n, \theta) $$

M step: maximize the lower bound:

$$ \arg\max_\theta \sum_n \sum_{y_n} q(y_n)\log p(x_n, y_n \mid \theta) $$

The EM algorithm is a great candidate when the M step can be done easily but p(x | θ) cannot be easily optimized over θ. E.g., for GMMs it was easy to compute means and variances given the memberships.
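The same two-step structure applies to any model. A schematic skeleton, where e_step and m_step are hypothetical problem-specific functions (for the GMM they would be the responsibility and update equations above):

```python
# Generic EM loop; e_step and m_step are supplied by the specific model.
def em(x, theta_init, e_step, m_step, n_iters=100):
    theta = theta_init
    for _ in range(n_iters):
        q = e_step(x, theta)   # E step: q(y_n) = p(y_n | x_n, theta)
        theta = m_step(x, q)   # M step: argmax_theta sum_n E_q[log p(x_n, y_n | theta)]
    return theta
```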
14 Naive Bayes: revisited. Consider the binary prediction problem. Let the data be distributed according to a probability distribution:

$$ p_\theta(y, \mathbf{x}) = p_\theta(y, x_1, x_2, \ldots, x_D) $$

We can simplify this using the chain rule of probability:

$$ p_\theta(y, \mathbf{x}) = p_\theta(y)\,p_\theta(x_1 \mid y)\,p_\theta(x_2 \mid x_1, y)\cdots p_\theta(x_D \mid x_1, x_2, \ldots, x_{D-1}, y) = p_\theta(y)\prod_{d=1}^{D} p_\theta(x_d \mid x_1, x_2, \ldots, x_{d-1}, y) $$

Naive Bayes assumption:

$$ p_\theta(x_d \mid x_{d'}, y) = p_\theta(x_d \mid y), \quad \forall d' \ne d $$

E.g., the words "free" and "money" are independent given spam.
15 Naive Bayes: a simple case. Case: binary labels and binary features. Probability of the data:

$$ p_\theta(y) = \text{Bernoulli}(\theta_0) $$
$$ p_\theta(x_d \mid y = +1) = \text{Bernoulli}(\theta_d^+) $$
$$ p_\theta(x_d \mid y = -1) = \text{Bernoulli}(\theta_d^-) $$

$$ p_\theta(y, \mathbf{x}) = p_\theta(y)\prod_{d=1}^{D} p_\theta(x_d \mid y) \quad \text{(1 + 2D parameters)} $$

$$ = \theta_0^{[y=+1]}(1-\theta_0)^{[y=-1]} \prod_{d=1}^{D} (\theta_d^+)^{[x_d=1,\,y=+1]}(1-\theta_d^+)^{[x_d=0,\,y=+1]} \prod_{d=1}^{D} (\theta_d^-)^{[x_d=1,\,y=-1]}(1-\theta_d^-)^{[x_d=0,\,y=-1]} $$
16 Naive Bayes: parameter estimation. Given data we can estimate the parameters by maximizing the data likelihood. The maximum likelihood estimates are:

$$ \hat{\theta}_0 = \frac{\sum_n [y_n = +1]}{N} \quad \text{(fraction of the data with label +1)} $$

$$ \hat{\theta}_d^+ = \frac{\sum_n [x_{d,n} = 1,\,y_n = +1]}{\sum_n [y_n = +1]} \quad \text{(fraction of instances with } x_d = 1 \text{ among label +1)} $$

$$ \hat{\theta}_d^- = \frac{\sum_n [x_{d,n} = 1,\,y_n = -1]}{\sum_n [y_n = -1]} \quad \text{(fraction of instances with } x_d = 1 \text{ among label -1)} $$
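These counts are again one line each. A sketch with a hypothetical toy dataset, where X is an (N, D) binary matrix and y holds ±1 labels:

```python
import numpy as np

# Hypothetical toy data: 4 examples, 3 binary features.
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, 1],
              [0, 1, 1]])
y = np.array([+1, +1, -1, -1])

theta0      = np.mean(y == +1)          # fraction of the data with label +1
theta_plus  = X[y == +1].mean(axis=0)   # P(x_d = 1 | y = +1), per feature
theta_minus = X[y == -1].mean(axis=0)   # P(x_d = 1 | y = -1), per feature
```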
17 Naive Bayes: EM. Now suppose you don't have the labels y_n. Initialize the parameters θ randomly.

E step: compute the distribution over the hidden variables q(y_n):

$$ q(y_n = +1) = p(y_n = +1 \mid x_n, \theta) \propto \theta_0 \prod_{d=1}^{D} (\theta_d^+)^{[x_{d,n}=1]}(1-\theta_d^+)^{[x_{d,n}=0]} $$

M step: estimate θ given the guesses:

$$ \theta_0 = \frac{\sum_n q(y_n = +1)}{N} \quad \text{(fraction of the data with label +1)} $$

$$ \theta_d^+ = \frac{\sum_n [x_{d,n} = 1]\,q(y_n = +1)}{\sum_n q(y_n = +1)} \quad \text{(fraction of instances with } x_d = 1 \text{ among +1)} $$

$$ \theta_d^- = \frac{\sum_n [x_{d,n} = 1]\,q(y_n = -1)}{\sum_n q(y_n = -1)} \quad \text{(fraction of instances with } x_d = 1 \text{ among -1)} $$
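A sketch of the full loop, continuing the toy setup above with hypothetical starting values:

```python
rng = np.random.default_rng(0)

# Random initialization (hypothetical starting values).
theta0      = 0.5
theta_plus  = rng.uniform(0.25, 0.75, size=X.shape[1])
theta_minus = rng.uniform(0.25, 0.75, size=X.shape[1])

for _ in range(50):
    # E step: unnormalized p(y = +1, x_n) and p(y = -1, x_n) per example.
    p_plus  = theta0 * np.prod(theta_plus ** X * (1 - theta_plus) ** (1 - X), axis=1)
    p_minus = (1 - theta0) * np.prod(theta_minus ** X * (1 - theta_minus) ** (1 - X), axis=1)
    q = p_plus / (p_plus + p_minus)   # q(y_n = +1); q(y_n = -1) = 1 - q
    # M step: fractional-count versions of the ML estimates.
    theta0      = q.mean()
    theta_plus  = (q[:, None] * X).sum(axis=0) / q.sum()
    theta_minus = ((1 - q)[:, None] * X).sum(axis=0) / (1 - q).sum()
```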
18 Summary. Expectation maximization is a general technique to estimate parameters of probabilistic models when some observations are hidden. EM iterates between estimating the hidden variables and optimizing parameters given the hidden variables. EM can be seen as a maximization of a lower bound of the data log-likelihood; we used Jensen's inequality to switch the log-sum to a sum-log. EM can be used for learning: mixtures of distributions for clustering, e.g. GMMs; parameters for hidden Markov models (next lecture); topic models in NLP; probabilistic PCA.
19 Slides credit. Some of the slides are based on the CIML book by Hal Daumé III. The figure for the EM lower bound is based on cxwangyi.wordpress.com/2008/11/. The clustering comparison of k-means vs. GMM is from github.com/nicta/mlss/tree/master/clustering/.