Mixture of Gaussians Models
Outline: Inference, Learning, and Maximum Likelihood; Why Mixtures?; Why Gaussians?; Building up to the Mixture of Gaussians (Single Gaussians, Fully-Observed Mixtures, Hidden Mixtures)
Perception Involves Inference and Learning. We must infer the hidden causes, α, of sensory data, x. Sensory data: air pressure wave frequency composition, patterns of electromagnetic radiation. Hidden causes: proverbial tigers in bushes, lecture slides, sentences. We must also learn the correct model for the relationship between hidden causes and sensory data. Models will be parameterized, with parameters θ. We will use quality of prediction as our figure of merit.
Generative Models [diagram: inference; explanation or prediction]
Maximum Likelihood and Maximum a Posteriori. The model parameters θ, or hidden causes α, that make the data most probable are called the maximum likelihood parameters, or causes. [diagram: recovering causes α is INFERENCE; fitting parameters θ is LEARNING]
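Written out in standard notation (these equations are reconstructed for reference, not taken verbatim from the slide):
\[ \theta_{\mathrm{ML}} = \arg\max_{\theta}\; p(x \mid \theta), \qquad \alpha_{\mathrm{MAP}} = \arg\max_{\alpha}\; p(\alpha \mid x; \theta) = \arg\max_{\alpha}\; p(x \mid \alpha; \theta)\, p(\alpha \mid \theta) \]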
In practice, we maximize log-likelihoods. Taking logs doesn't change the answer. Logs turn multiplication into addition. Logs turn many natural operations on probabilities into linear algebra operations. Negative log probabilities arise naturally in information theory.
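For i.i.d. data, the monotone log turns the product of likelihoods into a sum (a standard identity, added here for reference):
\[ \log \prod_{n=1}^{N} p(x_n \mid \theta) = \sum_{n=1}^{N} \log p(x_n \mid \theta) \]
so the maximizing θ is unchanged, and the objective decomposes over datapoints.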
The Maximum Likelihood Answer Depends on Model Class
Outline: Inference, Learning, and Maximum Likelihood; Why Mixtures?; Why Gaussians?; Building up to the Mixture of Gaussians (Single Gaussians, Fully-Observed Mixtures, Hidden Mixtures)
Why Mixtures?
What is a Mixture Model? [equation with terms labeled DATA, LIKELIHOOD, PRIOR] This is precisely analogous to using a basis to approximate a vector.
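The labeled equation did not survive extraction; the generic mixture form the labels refer to is:
\[ \underbrace{p(x)}_{\text{data}} \;=\; \sum_{\alpha} \underbrace{p(x \mid \alpha)}_{\text{likelihood}} \; \underbrace{p(\alpha)}_{\text{prior}} \]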
Example Mixture Datasets: Mixtures of Uniforms, Spike-and-Gaussian Mixtures, Mixtures of Gaussians
Why Gaussians?
Why Gaussians? [figure; credit: Wikipedia]
Why Gaussians? An unhelpfully terse answer: Gaussians satisfy a particular differential equation. From this differential equation, all the properties of the Gaussian family can be derived without solving for the explicit form: Gaussians are isotropic, the Fourier transform of a Gaussian is a Gaussian, the sum of Gaussian RVs is Gaussian, the Central Limit Theorem. See this blogpost for details: http://bit.ly/gaussian-diff-eq
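The differential equation itself is missing from the extracted text; for the standard (zero-mean, unit-variance) Gaussian it is usually written as:
\[ \frac{d\varphi}{dx} = -x\,\varphi(x), \qquad \varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2} \]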
Why Gaussians? Gaussians are everywhere, thanks to the Central Limit Theorem. Gaussians are the maximum entropy distribution with a given center (mean) and spread (std dev). Inference on Gaussians is linear algebra.
Central Limit Theorem. Statistics: the sum of many independent random variables with finite variances is approximately Gaussian. Science: if we assume that many small, independent random factors produce the noise in our results, we should see a Gaussian distribution.
Central Limit Theorem in Action
Central Limit Theorem in Action
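The "in Action" figures are not reproduced here; a minimal Python sketch of the same kind of demonstration (sums of uniform random variables approaching a Gaussian; the variable names and bin counts are illustrative, not from the slides):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Sum many independent, finite-variance random variables (here: uniforms).
n_terms, n_samples = 30, 100_000
sums = rng.uniform(-1, 1, size=(n_samples, n_terms)).sum(axis=1)

# The histogram of the sums is approximately Gaussian (Central Limit Theorem).
plt.hist(sums, bins=100, density=True)
plt.title(f"Sum of {n_terms} uniform RVs: approximately Gaussian")
plt.show()
```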
Why Gaussians? Gaussians are everywhere, thanks to the Central Limit Theorem. Gaussians are the maximum entropy distribution with a given center (mean) and spread (std dev). Inference on Gaussians is linear algebra.
Gaussians are a natural MAXENT distribution. The principle of maximum entropy (MAXENT) will be covered in detail later. Teaser: MAXENT maps statistics of data to probability distributions in a principled, faithful manner. For the most common choice of statistic, mean ± s.d., the MAXENT distribution is a Gaussian.
Why Gaussians? Gaussians are everywhere, thanks to the Central Limit Theorem. Gaussians are the maximum entropy distribution with a given center (mean) and spread (std dev). Inference on Gaussians is linear algebra.
Inference with Gaussians is just linear algebra. The log-probabilities of a Gaussian are a negative-definite quadratic form. Quadratic forms can be mapped onto matrices. So solving an inference problem becomes solving a linear algebra problem. Linear algebra is the Scottie Pippen of mathematics.
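Concretely, for the multivariate Gaussian (a standard identity, not in the extracted slide text):
\[ \log \mathcal{N}(x; \mu, \Sigma) = -\tfrac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu) - \tfrac{1}{2}\log\det(2\pi\Sigma) \]
so maximizing or conditioning this log-probability reduces to solving linear systems involving Σ⁻¹.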
Outline: Inference, Learning, and Maximum Likelihood; Why Mixtures?; Why Gaussians?; Building up to the Mixture of Gaussians (Single Gaussians, Fully-Observed Mixtures, Hidden Mixtures)
What is a Gaussian Mixture Model? [equation with terms labeled DATA, LIKELIHOOD, PRIOR; example figure]
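The missing equation, reconstructed in standard notation for the Gaussian case, is a weighted sum of Gaussian components:
\[ \underbrace{p(x)}_{\text{data}} \;=\; \sum_{\alpha=1}^{K} \underbrace{w_{\alpha}}_{\text{prior}} \, \underbrace{\mathcal{N}(x;\, \mu_{\alpha}, \Sigma_{\alpha})}_{\text{likelihood}}, \qquad \sum_{\alpha} w_{\alpha} = 1 \]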
Maximum Likelihood for Gaussian Mixture Models. Plan of Attack: 1. ML for a single Gaussian; 2. ML for a fully-observed mixture; 3. ML for a hidden mixture.
Maximum Likelihood for a Single Gaussian
Maximum Likelihood for a Single Gaussian
Maximum Likelihood for a Single Gaussian By a similar argument:
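The derivation equations are missing from the extracted slides; for data x₁, …, x_N the standard results they lead to are:
\[ \mu_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N} x_n, \qquad \sigma^{2}_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N} (x_n - \mu_{\mathrm{ML}})^{2} \]
obtained by setting the derivatives of the log-likelihood with respect to μ and σ² to zero.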
Maximum Likelihood for Gaussian Mixture Models. Plan of Attack: 1. ML for a single Gaussian; 2. ML for a fully-observed mixture; 3. ML for a hidden mixture.
Maximum Likelihood for a Fully-Observed Mixture. A fully-observed mixture means we receive datapoints (x, α). Examples: classification (discrete), regression (continuous). [figure: cats vs. dogs example; credit: Memming Wordpress Blog]
Maximum Likelihood for a Fully-Observed Mixture. For each mixture component, the problem is exactly the same: what are the parameters of a single Gaussian? Because we know which component each data point came from, we can solve all these problems separately, using the same method as for a single Gaussian. How do we figure out the mixture weights w?
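For the mixture weights, the ML answer is the empirical label frequency, w_α = N_α / N. A minimal NumPy sketch of the whole fully-observed fit (function and variable names are illustrative choices, not from the slides):

```python
import numpy as np

def fit_observed_mixture(x, labels):
    """ML fit of a 1-D Gaussian mixture when component labels are observed.

    x, labels: 1-D NumPy arrays of equal length.
    """
    params = {}
    n_total = len(x)
    for a in np.unique(labels):
        xa = x[labels == a]
        params[a] = {
            "weight": len(xa) / n_total,  # w_a = fraction of points labeled a
            "mean": xa.mean(),            # ML mean: empirical mean of component a
            "std": xa.std(),              # ML std: empirical std of component a
        }
    return params
```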
Bonus: We Can Now Classify Unlabeled Datapoints. We can label new datapoints x with a corresponding α using our model. This is the key idea behind supervised learning approaches in general. How do we label them? Maximum likelihood method: find the closest mean (in z-score units); that's our label. Fully Bayesian method: maintain a distribution over the labels, p(α | x; θ). [figure: dogs vs. cats example; credit: Memming Wordpress Blog]
Maximum Likelihood for Gaussian Mixture Models. Plan of Attack: 1. ML for a single Gaussian; 2. ML for a fully-observed mixture; 3. ML for a hidden mixture.
Hidden Variables Example: Spike Sorting. Martinez, Quian Quiroga, et al., 2009, Journal of Neuroscience Methods.
Maximum Likelihood for Models with Hidden Variables. p(x | μ, σ, α) is the same, but now we don't have the labels α. Problem: if we had the labels, we could find the parameters (just as before), and if we had the parameters, we could compute the labels (again, just as before). It's a chicken-and-egg problem! Solution: let's just make-believe we have the parameters.
Our Clustering Algorithm on Spike Sorting [figure; credit: Wikipedia]
Our Clustering Algorithm on Spike Sorting [figure; credit: Wikipedia]
The K-Means Algorithm: 1. Make up K values for the means of the clusters (usually initialized randomly). 2. Assign datapoints to clusters (each datapoint is assigned to the nearest cluster). 3. Update the cluster means to the new empirical means. 4. Repeat 2-4.
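A minimal NumPy sketch of the algorithm as listed above (function and variable names are illustrative; empty clusters are not handled):

```python
import numpy as np

def kmeans(x, k, n_iters=100, seed=0):
    """Basic K-means for data x of shape (N, D)."""
    rng = np.random.default_rng(seed)
    # 1. Make up K values for the means (random datapoints as initial means).
    means = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(n_iters):
        # 2. Assign each datapoint to the nearest cluster mean.
        dists = np.linalg.norm(x[:, None, :] - means[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # 3. Update each cluster mean to the empirical mean of its points.
        means = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return means, labels
```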
Issues with K-Means 1. Cluster assignment step (inference) is not Bayesian 2. Small changes in data can cause big changes in behavior
Soft Clustering? [figure from Bishop, PRML]
Expectation-Maximization for Means: 1. Make up K values for the means. 2. (E) Infer p(α | x) for each x and α. 3. (M) Update the means via weighted average, weighting the contribution of datapoint x by p(α | x). 4. Repeat 2-4.
Full Expectation-Maximization: 1. Make up K values for the means, covariances, and mixture weights. 2. (E) Infer p(α | x) for each x and α. 3. (M) Update the parameters with weighted averages, weighting the contribution of datapoint x by p(α | x). 4. Repeat 2-4.
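A compact 1-D sketch of the full EM loop described above (a minimal illustration; the function name, the SciPy dependency, and the initialization choices are mine, not from the slides):

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, k, n_iters=100, seed=0):
    """EM for a 1-D Gaussian mixture: returns weights, means, stds."""
    rng = np.random.default_rng(seed)
    # 1. Make up initial means, variances, and mixture weights.
    means = rng.choice(x, size=k, replace=False)
    stds = np.full(k, x.std())
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iters):
        # E-step: responsibilities r[n, a] = p(alpha = a | x_n) via Bayes' rule.
        lik = norm.pdf(x[:, None], loc=means[None, :], scale=stds[None, :])
        r = weights * lik
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted-average updates for each parameter.
        n_a = r.sum(axis=0)
        weights = n_a / len(x)
        means = (r * x[:, None]).sum(axis=0) / n_a
        stds = np.sqrt((r * (x[:, None] - means) ** 2).sum(axis=0) / n_a)
    return weights, means, stds
```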
E-Step: Bayes Rule for Inference
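The Bayes-rule equation is not in the extracted text; the standard E-step posterior (the responsibility of component α for datapoint x_n) is:
\[ p(\alpha \mid x_n; \theta) \;=\; \frac{w_{\alpha}\, \mathcal{N}(x_n;\, \mu_{\alpha}, \Sigma_{\alpha})}{\sum_{\beta} w_{\beta}\, \mathcal{N}(x_n;\, \mu_{\beta}, \Sigma_{\beta})} \]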
M-Step: Direct Maximization
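The update equations did not survive extraction; writing r_{nα} = p(α | x_n; θ) for the E-step responsibilities, the standard closed-form maximizers are:
\[ N_{\alpha} = \sum_{n} r_{n\alpha}, \qquad w_{\alpha} = \frac{N_{\alpha}}{N}, \qquad \mu_{\alpha} = \frac{1}{N_{\alpha}}\sum_{n} r_{n\alpha}\, x_n, \qquad \Sigma_{\alpha} = \frac{1}{N_{\alpha}}\sum_{n} r_{n\alpha}\, (x_n - \mu_{\alpha})(x_n - \mu_{\alpha})^{\top} \]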
Bonus Slides: Information Geometry
A Geometric View [diagram: the family of model distributions as a subset of all probability distributions]
Geometry of MLE [diagram: all probability distributions; the family of model distributions (here, Gaussians with different means)]
Geometry of EM [diagram: all probability distributions; the family of data labelings (same p(x), different p(α | x)); the family of model distributions]
[diagram: Family of Data Labelings; E-Step: Inference; M-Step: Learning; Family of Model Distributions]