Fast Approximate MAP Inference for Bayesian Nonparametrics
Y. Raykov, A. Boukouvalas, M. A. Little
Department of Mathematics, Aston University
10th Conference on Bayesian Nonparametrics, 2015
Outline

1. Iterated Conditional Modes
2. Dirichlet Process Mixtures: MAP-DP; Experiments and Results
3. Infinite Hidden Markov Model (HDP-iHMM)
Iterated Conditional Modes

For PGMs, iterated conditional modes (ICM) is a deterministic algorithm that maximizes the conditional distribution of each random variable in turn while holding the rest fixed. It finds an approximation to the MAP solution for the joint distribution over all the random variables in the PGM. It is a cheap alternative to sampling approaches, and is exactly equivalent to simulated annealing at the zero-temperature state.
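To make the coordinate-wise maximization concrete, here is a minimal illustrative sketch of ICM on a toy discrete PGM whose joint is stored as an explicit table; the table, its shape and the variable names are invented for illustration and are not from the talk.

```python
# Minimal ICM sketch on a toy discrete PGM: repeatedly set each variable
# to the mode of its conditional given the others, i.e. coordinate-wise
# ascent on the joint p(a, b, c).
import numpy as np

rng = np.random.default_rng(0)
joint = rng.random((3, 4, 5))   # unnormalized joint table over (a, b, c)
joint /= joint.sum()

a, b, c = 0, 0, 0               # arbitrary initialization
while True:
    a_new = int(np.argmax(joint[:, b, c]))          # mode of p(a | b, c)
    b_new = int(np.argmax(joint[a_new, :, c]))      # mode of p(b | a, c)
    c_new = int(np.argmax(joint[a_new, b_new, :]))  # mode of p(c | a, b)
    if (a_new, b_new, c_new) == (a, b, c):
        break                   # no coordinate improves: local MAP reached
    a, b, c = a_new, b_new, c_new

print("ICM local mode:", (a, b, c), "with joint probability", joint[a, b, c])
```

Each sweep can only increase the joint probability, so the loop is guaranteed to stop at a local mode; this monotonicity is the convergence guarantee referred to in the summary.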
Notation and model

Spherical model:
$$\mu_k \sim \mathcal{N}(\mu_0, \sigma_0 I), \quad \pi \sim \mathrm{Dir}(\alpha_1, \ldots, \alpha_K), \quad z_1, \ldots, z_N \sim \mathrm{Categorical}(\pi), \quad x_i \sim \mathcal{N}(\mu_{z_i}, \sigma I)$$

Figure: Bayesian mixture model

Bayesian spherical GMM, negative log-likelihood:
$$-\log p(X, Z, \ldots) = \sum_{i=1}^{N} \sum_{k:\, z_i = k} \left( \frac{\|x_i - \mu_k\|_2^2}{2\sigma^2} - \log \pi_k \right) - \sum_{k=1}^{K} P_0^k,$$
with prior and constant terms $P_0^k = \log \alpha_k + \log p(\mu_k \mid \mu_0) + C$.
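As a concrete companion to this specification, a minimal sketch that draws a synthetic dataset from the finite Bayesian spherical GMM; the constants (N, K, D, the zero prior mean, sigma and sigma0) are assumed values chosen for illustration.

```python
# Sample a synthetic dataset from the finite Bayesian spherical GMM above.
import numpy as np

rng = np.random.default_rng(1)
N, K, D = 500, 3, 2
alpha = np.full(K, 1.0)            # symmetric Dirichlet concentration
sigma0, sigma = 5.0, 0.5           # prior and emission standard deviations

mu = rng.normal(0.0, sigma0, size=(K, D))         # mu_k ~ N(0, sigma0^2 I)
pi = rng.dirichlet(alpha)                         # pi ~ Dir(alpha)
z = rng.choice(K, size=N, p=pi)                   # z_i ~ Categorical(pi)
x = mu[z] + rng.normal(0.0, sigma, size=(N, D))   # x_i ~ N(mu_{z_i}, sigma^2 I)
```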
MAP-DP: MAP problem

Iterated Conditional Modes:
$$\arg\min_{Z, \mu, \pi} \sum_{i=1}^{N} \sum_{k:\, z_i = k} \left( \frac{\|x_i - \mu_k\|_2^2}{2\sigma^2} - \log \pi_k \right) - \sum_{k=1}^{K} P_0^k$$

Compute the assignments:
$$q_{i,k} = -\log \pi_k + \frac{\|x_i - \mu_k\|_2^2}{2\sigma^2}, \qquad z_i = \arg\min_{k \in \{1, \ldots, K\}} q_{i,k}$$

Update the cluster means $\mu_1, \ldots, \mu_K$ by taking the mode of their posterior; update the cluster weights from $\pi_k = \frac{N_k + \alpha_k - 1}{N}$ for $k = 1, \ldots, K$.

Compare K-means: $\arg\min_{Z, \mu} \sum_{i=1}^{N} \sum_{k:\, z_i = k} \|x_i - \mu_k\|_2^2$
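A hedged sketch of one such ICM loop, continuing the variables of the sampling sketch above (x, N, K, D, alpha, sigma, sigma0); the mean update assumes the conjugate spherical Gaussian prior with zero mean, and with alpha_k = 1 the weight update reduces to the slide's formula.

```python
# One ICM loop for the finite Bayesian spherical GMM: assignments by
# argmin of q_{i,k}, means by posterior mode, weights by Dirichlet mode.
import numpy as np

rng = np.random.default_rng(2)
mu_hat = x[rng.choice(N, K, replace=False)].copy()   # init means at data points
pi_hat = np.full(K, 1.0 / K)

for _ in range(50):  # fixed sweep budget; in practice stop when z stabilizes
    # q_{i,k} = ||x_i - mu_k||^2 / (2 sigma^2) - log pi_k
    q = ((x[:, None, :] - mu_hat[None]) ** 2).sum(-1) / (2 * sigma**2) \
        - np.log(pi_hat)[None]
    z_hat = q.argmin(axis=1)

    for k in range(K):                   # posterior mode of each cluster mean
        members = x[z_hat == k]
        prec = len(members) / sigma**2 + 1.0 / sigma0**2
        mu_hat[k] = (members.sum(0) / sigma**2) / prec  # shrunk toward prior mean 0

    counts = np.bincount(z_hat, minlength=K)
    pi_hat = (counts + alpha - 1) / (N + alpha.sum() - K)  # Dirichlet posterior mode
    pi_hat = np.maximum(pi_hat, 1e-12)   # guard log(0) for emptied clusters
```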
MAP-DP: Underlying model

Fully collapsed DP mixture model:
$$z_1, \ldots, z_N \sim \mathrm{CRP}(\alpha, N), \qquad x_i \sim F(\theta_{z_i}) \text{ for all } i = 1, \ldots, N$$

Figure: Collapsed DP mixture model
MAP-DP for DP mixtures

Objective function:
$$\arg\min_{Z, K^+} \; -\sum_{i=1}^{N} \log p(x_i \mid \theta_{z_i}) - \sum_{k=1}^{K^+} \log \Gamma(N_k) - \sum_{k=1}^{K^+} P_0^k,$$
with prior term $P_0^k = \log \alpha + \log p(\theta_k \mid \theta_0)$.

Keeping $N_{k,-i}$ and $\theta^{-i}$ updated, compute for each observation:
$$q_{i,k} = -\log N_{k,-i} - \log p(x_i \mid \theta_k^{-i}), \qquad q_{i,K^++1} = -\log \alpha - \log p(x_i \mid \theta_0),$$
$$z_i = \arg\min_{k \in \{1, \ldots, K^+ + 1\}} q_{i,k}$$
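The assignment step with its extra "open a new cluster" slot can be sketched as follows for spherical Gaussian clusters with a conjugate N(0, sigma0^2 I) prior on the means; the predictive used here is one concrete instance of the collapsed likelihood, and the function names, alpha_conc and all constants are assumptions for illustration.

```python
# Sketch of one MAP-DP sweep: score each existing cluster by
# -log N_{k,-i} - log(posterior predictive of x_i), and a new cluster by
# -log(alpha_conc) - log(prior predictive), then take the argmin.
import numpy as np

def log_pred(xi, members, sigma, sigma0):
    """Log posterior predictive of xi under a spherical Gaussian cluster
    with known sigma and a conjugate N(0, sigma0^2 I) prior on its mean."""
    n, D = members.shape[0], xi.shape[0]
    prec = n / sigma**2 + 1.0 / sigma0**2
    m = (members.sum(0) / sigma**2) / prec   # posterior mean (zeros if empty)
    var = 1.0 / prec + sigma**2              # predictive variance per dimension
    return -0.5 * (((xi - m) ** 2).sum() / var + D * np.log(2 * np.pi * var))

def map_dp_sweep(x, z, alpha_conc, sigma, sigma0):
    for i in range(len(x)):
        z[i] = -1                            # hold x_i out of its cluster
        labels = [k for k in np.unique(z) if k >= 0]
        q = [-np.log(len(x[z == k])) - log_pred(x[i], x[z == k], sigma, sigma0)
             for k in labels]
        q.append(-np.log(alpha_conc) - log_pred(x[i], x[:0], sigma, sigma0))
        best = int(np.argmin(q))
        z[i] = labels[best] if best < len(labels) else max(labels, default=-1) + 1
    return z

# e.g. repeat until assignments stop changing:
# z = map_dp_sweep(x, np.zeros(len(x), dtype=int), 1.0, sigma=0.5, sigma0=5.0)
```

Unlike DP-means below, the cluster-size term $-\log N_{k,-i}$ retains the CRP's rich-get-richer reinforcement effect.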
MAP-DP: Small Variance Asymptotics (DP-means)

Objective function:
$$\arg\min_{Z, \mu, K} \sum_{i=1}^{N} \sum_{k:\, z_i = k} \|x_i - \mu_k\|_2^2 + \lambda K$$

Compute for each observation:
$$q_{i,k} = \|x_i - \mu_k\|_2^2, \qquad q_{i,K+1} = \lambda, \qquad z_i = \arg\min_{k \in \{1, \ldots, K+1\}} q_{i,k}$$
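For contrast, a minimal DP-means sketch in the same style: k-means with one extra candidate whose cost is the penalty lambda, opening a new cluster whenever every squared distance to a centre exceeds lambda; lam and the sweep budget are assumed inputs.

```python
# Minimal DP-means sketch: sequential assignments with a new-cluster
# penalty lam, followed by k-means-style mean updates.
import numpy as np

def dp_means(x, lam, n_sweeps=20):
    mu = [x.mean(0)]                         # start from a single cluster
    z = np.zeros(len(x), dtype=int)
    for _ in range(n_sweeps):
        for i, xi in enumerate(x):
            d = [((xi - m) ** 2).sum() for m in mu]
            if min(d) > lam:                 # every centre costs more than lam
                mu.append(xi.copy())         # open a new cluster at x_i
                z[i] = len(mu) - 1
            else:
                z[i] = int(np.argmin(d))
        kept = [k for k in range(len(mu)) if (z == k).any()]
        z = np.array([kept.index(k) for k in z])          # drop empty clusters
        mu = [x[z == k].mean(0) for k in range(len(kept))]
    return np.asarray(mu), z

# e.g. centres, labels = dp_means(x, lam=4.0)
```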
MAP-DP: Comparison of MAP-DP and DP-means

Similarities:
- Both provide approximately optimal clustering
- Both are fast and scalable
- Both are non-parametric

Advantages of MAP-DP over DP-means:
- Retains the reinforcement effect
- No degeneracy in the likelihood
- Rigorous way of choosing the concentration parameter $\alpha$
- The prior keeps its influence on the objective function
- Principled way to handle non-spherical and missing data
MAP-DP

Figure: Association chart of ICM and SVA algorithms
Experiments and Results: Synthetic Study (CRP mixture data)

Sample cluster indicators: $z_1, \ldots, z_N \sim \mathrm{CRP}(\alpha, N)$.
Sample $K^+$ cluster parameters: $\{\mu_k, \Sigma_k\} \sim \mathrm{NW}(\theta_0)$.
For each $k$, sample $N_k$ observations: $x_i \sim \mathcal{N}(\mu_k, \Sigma_k)$.

Figure: Synthetically-generated CRP mixture data
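A minimal sketch of this generator; for brevity it draws spherical Gaussian cluster parameters in place of the Normal-Wishart draw on the slide, and all constants (N, D, the concentration, the scale values) are assumed.

```python
# Generate CRP mixture data: CRP cluster indicators, then cluster means
# and spherical Gaussian observations (a simplification of the NW draw).
import numpy as np

rng = np.random.default_rng(3)
N, D, alpha_conc = 400, 2, 2.0

z = np.zeros(N, dtype=int)
counts = [1]                               # first customer sits at table 0
for i in range(1, N):
    p = np.array(counts + [alpha_conc], dtype=float)
    k = int(rng.choice(len(p), p=p / p.sum()))   # existing table or a new one
    if k == len(counts):
        counts.append(1)
    else:
        counts[k] += 1
    z[i] = k

mu = rng.normal(0.0, 5.0, size=(len(counts), D))   # cluster means
x = mu[z] + rng.normal(0.0, 0.5, size=(N, D))      # observations
```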
Experiments and Results: Synthetic study

              Gibbs        MAP-DP       DP-means
  NMI         0.81 (0.1)   0.82 (0.1)   0.68 (0.1)
  Iterations  1395 (651)   10 (3)       18 (7)
  ΔK          3.6 (3.0)    6.6 (2.9)    0.0
Experiments and Results: Case study

Parkinson's Disease (PD) Data Organizing Center database. Aim of the study: exploring PD sub-types using the PD-DOC database. Data from 527 patients, 285 features, with missing data; categorical, Poisson and binomial data.
Experiments and Results: Case study results

Three main, equally-sized clusters, suggesting different PD sub-types. Examples of features that separate the clusters:

  Feature              Cluster 1   Cluster 2   Cluster 3
  Sleep disturbance*
  Right leg agility*
  Risk of stroke       4%          15%         6%

  * Ratio of affected to non-affected patients.
Infinite Hidden Markov Model (iHMM)

Each row of the transition matrix is a DP:
$$p(x_t \mid z_{t-1}) = \sum_{z_t} \pi_{z_{t-1}, z_t} \, p(x_t \mid z_t)$$
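A truncated sketch of that construction: a global stick-breaking weight vector shared across rows, with each transition row drawn from a DP centred on it; gamma, alpha0 and the truncation level K_max are assumed values.

```python
# Truncated HDP prior over transition rows: global stick-breaking weights
# beta, then each row pi_j ~ DP(alpha0, beta), approximated by a finite
# Dirichlet on K_max states.
import numpy as np

rng = np.random.default_rng(5)
gamma, alpha0, K_max = 2.0, 5.0, 20

v = rng.beta(1.0, gamma, size=K_max)                     # stick fractions
beta = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
beta /= beta.sum()                                       # renormalize truncation

Pi = rng.dirichlet(alpha0 * beta, size=K_max)            # one DP draw per row
```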
Synthetic study: HMM with spherical Gaussian emissions

Sample 4000 data points from an HMM with spherical emissions $\mathcal{N}(\mu_1, \sigma I_3), \ldots, \mathcal{N}(\mu_5, \sigma I_3)$, with 0.96 probability of self-transition and 0.01 probability for each of the remaining transitions.

              Gibbs   MAP-iHMM   SVA-iHMM
  NMI
  Iterations
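A minimal sketch generating this synthetic sequence; the state means and emission scale are assumed values, while the transition probabilities match the slide (0.96 self-transition plus 4 × 0.01).

```python
# Generate the synthetic HMM data: 5 hidden states, sticky transitions,
# spherical Gaussian emissions in R^3.
import numpy as np

rng = np.random.default_rng(4)
K, D, T, sigma = 5, 3, 4000, 0.5
A = np.full((K, K), 0.01) + (0.96 - 0.01) * np.eye(K)   # each row sums to 1
mu = rng.normal(0.0, 5.0, size=(K, D))                  # state emission means

z = np.zeros(T, dtype=int)
for t in range(1, T):
    z[t] = rng.choice(K, p=A[z[t - 1]])                 # Markov state sequence
x = mu[z] + rng.normal(0.0, sigma, size=(T, D))         # emissions
```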
Summary

ICM sacrifices many of the Bayesian advantages of BNP models:
- It does not average over the uncertain variables
- It obtains only a point estimate of the joint posterior
- It underestimates the variance and fails to extract information from the tails of the true underlying distribution

Nevertheless, the suggested methods obtain a statistically principled approximate solution to the MAP problem with little computational effort. Results are easy to interpret, and convergence to a local solution is guaranteed. The MAP schemes suggest a way to fit complex BNP models to at least moderately big problems. Applying ICM to the non-degenerate likelihood function preserves some of the essential properties of the model.
Appendix: For Further Reading

Relevant work:
- Simple approximate MAP inference for Dirichlet processes (Y. Raykov, A. Boukouvalas and M. A. Little)
- Fast search for Dirichlet process mixture models (H. Daumé III, 2007)
- Scaling the Indian Buffet Process via Submodular Maximization (C. Reed and Z. Ghahramani, 2013)
- Fast Bayesian Inference in Dirichlet Process Mixture Models (L. Wang and D. B. Dunson, 2011)
- Revisiting k-means: New Algorithms via Bayesian Nonparametrics (B. Kulis and M. I. Jordan, 2012)
- MAD-Bayes: MAP-based Asymptotic Derivations from Bayes (T. Broderick, B. Kulis and M. I. Jordan, 2013)