Latent Variable Framework for Modeling and Separating Single-Channel Acoustic Sources


Cognitive and Neural Systems, Boston University, Boston, Massachusetts
Latent Variable Framework for Modeling and Separating Single-Channel Acoustic Sources
Dissertation Defense, Madhusudana Shashanka (shashanka@cns.bu.edu), 17 August 2007
Committee Chair: Prof. Daniel Bullock; Readers: Prof. Barbara Shinn-Cunningham, Dr. Paris Smaragdis, Prof. Frank Guenther; Reviewers: Dr. Bhiksha Raj, Prof. Eric Schwartz

Outline Introduction Time-Frequency Structure Background Latent Variable Decomposition: A Probabilistic Framework Contributions Sparse Overcomplete Decomposition Conclusions 2

Introduction. "The achievements of the ear are indeed fabulous. While I am writing, my elder son rattles the fire rake in the stove, the infant babbles contentedly in his baby carriage, the church clock strikes the hour. In the vibrations of air striking my ear, all these sounds are superimposed into a single, extremely complex stream of pressure waves. Without doubt the achievements of the ear are greater than those of the eye." Wolfgang Metzger, in Gesetze des Sehens (1953); abridged in English and quoted by Reinier Plomp (2002). 3

Cocktail Party Effect Introduction Cocktail Party Effect Colin Cherry (1953) Our ability to follow one speaker in the presence of other sounds. The auditory system separates the input into distinct auditory objects. Challenging problem from a computational perspective. (Cocktail Party by SLAW, Maniscalco Gallery. From slides of Prof. Shinn-Cunningham, ARO 2006) 4

Cocktail Party Effect Introduction Cocktail Party Effect Fundamental questions How does the brain solve it? Is it possible to build a machine capable of solving it in a satisfactory manner? Need not mimic the brain Two cases Multi-channel (Human auditory system is an example with two sensors) Single-Channel 5

Introduction Source Separation: Formulation. The observed signal is a sum of N source signals s_1(t), ..., s_j(t), ..., s_N(t):
x(t) = Σ_{j=1}^{N} s_j(t)
Given just x(t), how to separate the sources s_j(t)? Problem: indeterminacy. There are multiple ways in which the source signals can be reconstructed from the available information. 6

Introduction Source Separation: Approaches Source Separation Exact solutions not possible, but can approximate by utilizing information about the problem Psychoacoustically/Biologically inspired approach Understand how the auditory system solves the problem Utilize the insights gained (rules and heuristics) in the artificial system Engineering approach Utilize probability and signal processing theories to take advantage of known or hypothesized structure/statistics of the source signals and/or the mixing process 7

Introduction Source Separation: Approaches. Psychoacoustically inspired approach: seminal work of Bregman (1990), Auditory Scene Analysis (ASA); Computational Auditory Scene Analysis (CASA), computational implementations of the views outlined by Bregman (Rosenthal and Okuno, 1998). Limitations: how to reconcile subjective concepts (e.g., similarity, continuity) with strictly deterministic computational platforms? Difficulty incorporating statistical information. Engineering approach: most work has focused on multi-channel signals; Blind Source Separation, beamforming and ICA; unsuitable for single-channel signals. 8

Introduction Source Separation: This Work Source Separation We take a machine learning approach in a supervised setting Assumption: One or more sources present in the mixture are known Analyze the sample waveforms of the known sources and extract characteristics unique to each one Utilize the learned information for source separation and other applications Focus on developing a probabilistic framework for modeling single-channel sounds Computational perspective, goal not to explain human auditory processing Provide a framework grounded in theory that allows principled extensions Aim is not just to build a particular separation system 9

Outline Introduction Time-Frequency Structure We need a representation of audio to proceed Latent Variable Decomposition: A Probabilistic Framework Sparse Overcomplete Decomposition Conclusions 10

Representation Time-Frequency Structure Audio Representation. Time-domain representation: sampled waveform, where each sample represents the sound pressure level at a particular time instant (amplitude versus time). Time-frequency (TF) representation: shows the energy in TF bins, explicitly showing the variation along both time and frequency. 11

Time-Frequency Structure TF Representations Audio Representation. Short Time Fourier Transform (STFT; Gabor, 1946). Time-frames: successive fixed-width snippets of the waveform (windowed and overlapping). Spectrogram: Fourier transforms of all time slices; the result for a given time slice is a spectral vector. (Example spectrograms: piano, cymbals, female speaker.) Other TF representations are possible (different filter banks), but only the STFT is considered in this work: Constant-Q (Brown, 1991), Gammatone (Patterson et al., 1995), Gamma-chirp (Irino and Patterson, 1997), TF distributions (Laughlin et al., 1994). 12

Time-Frequency Structure Magnitude Spectrograms Audio Representation. Short Time Fourier Transform (STFT; Gabor, 1946). Magnitude spectrograms: TF entries represent energy-like quantities that can be approximated as adding linearly for sound mixtures. Phase information is ignored. Enough information is present in the magnitude spectrogram; as a simple test, listen to speech reconstructed with the cymbals' phase, or cymbals reconstructed with the piano's phase. 13
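
To make the representation concrete, here is a minimal NumPy sketch of a magnitude spectrogram computed with a windowed, overlapping STFT; the frame length, hop size, and Hann window are illustrative choices, not values specified in the talk.

```python
import numpy as np

def magnitude_spectrogram(x, n_fft=1024, hop=256):
    """Magnitude STFT of a 1-D signal: (frequencies x frames), illustrative parameters."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (n_fft // 2 + 1, n_frames)

# Example: two toy sources and their mixture; magnitude additivity is an approximation.
fs = 16000
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 440 * t)      # a tone
s2 = 0.5 * np.random.randn(fs)        # noise
V_mix = magnitude_spectrogram(s1 + s2)
V_sum = magnitude_spectrogram(s1) + magnitude_spectrogram(s2)  # close to V_mix, not equal
```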

TF Masks Time-Frequency Structure Modeling TF Structure. (Figure: target, mixture, masked mixture.) Time-frequency masks are popular in the CASA literature: assign higher weight to areas of the spectrogram where the target is dominant. Intuition: the dominant source masks the energy of the weaker ones in any TF bin, so the dominant TF bins alone are sufficient for reconstruction. This reformulates the problem: the goal is to estimate the TF mask (Ideal Binary Mask; Wang, 2005), utilizing cues like harmonicity, F0 continuity, common onsets/offsets, etc.: synchrony strands (Cooke, 1991), TF maps (Brown and Cooke, 1994), correlograms (Weintraub, 1985; Slaney and Lyon, 1990). 14
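
As a concrete illustration of the ideal-binary-mask idea (not of the specific CASA systems cited above), the sketch below builds the oracle mask from known source spectrograms and applies it to the mixture; all variable names are hypothetical.

```python
import numpy as np

def ideal_binary_mask(V_target, V_interference):
    """1 where the target dominates a TF bin, 0 elsewhere (oracle mask)."""
    return (V_target > V_interference).astype(float)

def apply_mask(mask, mixture_spec):
    """Keep only the TF bins assigned to the target; the result can then be inverted."""
    return mask * mixture_spec

# Toy usage with random arrays standing in for real magnitude spectrograms.
rng = np.random.default_rng(0)
V_target, V_interf = rng.random((513, 100)), rng.random((513, 100))
mask = ideal_binary_mask(V_target, V_interf)
masked = apply_mask(mask, V_target + V_interf)
```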

TF Masks Time-Frequency Structure Modeling TF Structure. Time-frequency masks: limitations. (Figure: target, mixture, masked mixture.) They implement fuzzy rules and heuristics from ASA; the methods are ad hoc and have difficulty incorporating statistical information (Roweis, 2000). Assumption: energy is sparsely distributed, i.e., different sources are disjoint in their spectro-temporal content (Yilmaz and Rickard, 2004); masking performs well only on mixtures that exhibit well-defined regions in the TF plane corresponding to the various sources (van der Kouwe et al., 2001). 15

Time-Frequency Structure Basis Decomposition Methods Modeling TF Structure. Basis decomposition idea: an observed data vector can be expressed as a linear combination of a set of basis components; here the data vectors are spectral vectors. Intuition: every source exhibits characteristic structure that can be captured by a finite set of components.
v_t = Σ_{k=1}^{K} h_{kt} w_k, i.e., V = W H
where the columns of W are the basis components and H holds the mixture weights. 16

Time-Frequency Structure Basis Decomposition Methods Modeling TF Structure. Many matrix factorization methods are available, e.g., PCA and ICA. Toy example: PCA components can have negative values, but spectrogram values are non-negative, so how should such components be interpreted? 17

Time-Frequency Structure Basis Decomposition Methods Modeling TF Structure. Non-negative Matrix Factorization (Lee and Seung, 1999) explicitly enforces non-negativity on both factored matrices (basis components W and mixture weights H) and is useful for analyzing spectrograms (Smaragdis, 2004; Virtanen, 2006). Issues: can't incorporate prior biases; restricted to 2D representations. 18
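
For reference, a minimal sketch of Lee and Seung's multiplicative updates for NMF with the KL-type objective commonly used on spectrograms; initialization, iteration count, and the small epsilon are arbitrary choices, not values from the talk.

```python
import numpy as np

def nmf_kl(V, K, n_iter=200, eps=1e-9, seed=0):
    """Factor a non-negative matrix V (freq x time) as W (freq x K) @ H (K x time)."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, T)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)  # weight update
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)  # basis update
    return W, H
```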

Outline Introduction Time-Frequency Structure Latent Variable Decomposition: Probabilistic Framework Our alternate approach: Latent variable decomposition treating spectrograms as histograms Sparse Overcomplete Decomposition Conclusions 19

Latent Variables Latent Variable Decomposition Background Widely used in social and behavioral sciences Traced back to Spearman (1904), factor analytic models for Intelligence Testing Latent Class Models (Lazarsfeld and Henry, 1968) Principle of local independence (or the common cause criterion) If a latent variable underlies a number of observed variables, the observed variables conditioned on the latent variable should be independent 20

Latent Variable Decomposition Spectrograms as Histograms Generative Model. Spectral vectors give the energy at various frequency bins. Each frame is modeled as a histogram of multiple draws from a frame-specific multinomial distribution over frequencies; each draw is a quantum of energy. Urn analogy for frame t: pick a ball, note its color (frequency f), update the histogram by +1, and place it back; P_t(f) is the multinomial distribution underlying the t-th spectral vector. 21
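
A minimal sketch of this "spectrogram frame as histogram" view: drawing V_t quanta of energy from a frame-specific multinomial over frequencies and accumulating the histogram; the distribution below is made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq = 8
P_t_f = rng.random(n_freq)
P_t_f /= P_t_f.sum()          # frame-specific multinomial distribution P_t(f)
V_t = 1000                    # total energy (number of quanta) in frame t

# Each draw picks one frequency bin ("ball colour") and adds one quantum to the histogram.
frame_histogram = rng.multinomial(V_t, P_t_f)
print(frame_histogram, frame_histogram.sum())   # counts per frequency bin, sums to V_t
```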

Model Latent Variable Decomposition Framework. (Figure: the frame-specific distribution P_t(f) over frequencies f.) 22

Model Latent Variable Decomposition Framework. Generative model (mixture multinomial):
P_t(f) = Σ_z P(f|z) P_t(z)
Procedure: pick a latent variable z (an urn) with probability P_t(z); pick a frequency f from that urn with probability P(f|z); repeat the process V_t times, the total energy in the t-th frame. 23

Model Latent Variable Decomposition Framework. Generative model (mixture multinomial):
P_t(f) = Σ_z P(f|z) P_t(z)
where P_t(f) is the frame-specific spectral distribution, P_t(z) are the frame-specific mixture weights, and P(f|z) are the source-specific basis components. Procedure: pick a latent variable z (an urn) with probability P_t(z); pick a frequency f from that urn with probability P(f|z); repeat V_t times, the total energy in the t-th frame. 24


Latent Variable Decomposition Framework. The mixture multinomial P_t(f) as a point in a simplex whose corners are the basis components P(f|z). 28

Latent Variable Decomposition Learning the Model Framework.
P_t(f) = Σ_z P(f|z) P_t(z)
with frame-specific spectral distribution P_t(f), frame-specific mixture weights P_t(z), and source-specific basis components P(f|z). Analysis: given the spectrogram V, estimate the parameters. The P(f|z) represent the latent structure; they underlie all the frames and hence characterize the source. 29

Latent Variable Decomposition Learning the Model: Geometry Model Geometry. (Figure: 3 basis vectors in the simplex with corners (100), (010), (001); simplex boundary, data points, basis vectors, convex hull.) Spectral distributions and basis components are points in a simplex. Estimation process: find the corners of the convex hull that surrounds the normalized spectral vectors in the simplex. 30

Latent Variable Decomposition Framework Learning the Model: Parameter Estimation. Expectation-Maximization algorithm, with V_ft the entries of the training spectrogram:
P_t(z|f) = P_t(z) P(f|z) / Σ_z' P_t(z') P(f|z')
P(f|z) = Σ_t V_ft P_t(z|f) / Σ_f' Σ_t V_f't P_t(z|f')
P_t(z) = Σ_f V_ft P_t(z|f) / Σ_z' Σ_f V_ft P_t(z'|f)
31
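
A minimal NumPy sketch of these EM updates for a single source, written directly from the equations above; array shapes, iteration count, and initialization are illustrative assumptions.

```python
import numpy as np

def plca(V, Z, n_iter=100, seed=0, eps=1e-12):
    """EM for P_t(f) = sum_z P(f|z) P_t(z).  V: (F, T) magnitude spectrogram.
    Returns Pf_z: (F, Z) basis components, Pz_t: (Z, T) mixture weights."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    Pf_z = rng.random((F, Z)); Pf_z /= Pf_z.sum(axis=0, keepdims=True)
    Pz_t = rng.random((Z, T)); Pz_t /= Pz_t.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior P_t(z|f) for every (f, t); shape (F, Z, T)
        joint = Pf_z[:, :, None] * Pz_t[None, :, :]
        post = joint / (joint.sum(axis=1, keepdims=True) + eps)
        # M-step: reweight posteriors by the observed energies V_ft and renormalize
        weighted = V[:, None, :] * post                      # V_ft * P_t(z|f)
        Pf_z = weighted.sum(axis=2); Pf_z /= Pf_z.sum(axis=0, keepdims=True) + eps
        Pz_t = weighted.sum(axis=0); Pz_t /= Pz_t.sum(axis=0, keepdims=True) + eps
    return Pf_z, Pz_t
```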

Example Bases Latent Variable Decomposition Framework. (Figure: example basis components P(f|z) and mixture weights learned for speech, harp, and piano; axes are frequency f, component index z, and time t.) 32

Latent Variable Decomposition Source Separation. (Figure: spectrograms of the harp and speech sources.) 33

Latent Variable Decomposition Source Separation. (Figure: the harp + speech mixture, together with the harp bases and speech bases learned from training data.) 34

Latent Variable Decomposition Source Separation. The mixture spectrogram is modeled as a linear combination of the individual sources:
P_t(f) = P_t(s_1) P(f|s_1) + P_t(s_2) P(f|s_2)
       = P_t(s_1) Σ_{z ∈ {z_s1}} P_t(z|s_1) P_s1(f|z) + P_t(s_2) Σ_{z ∈ {z_s2}} P_t(z|s_2) P_s2(f|z)
35

Latent Variable Decomposition Source Separation. Expectation-Maximization algorithm, with V_ft the entries of the mixture spectrogram:
P_t(s,z|f) = P_t(s) P_t(z|s) P_s(f|z) / Σ_s' P_t(s') Σ_{z ∈ {z_s'}} P_t(z|s') P_s'(f|z)
P_t(s) = Σ_f V_ft Σ_{z ∈ {z_s}} P_t(s,z|f) / Σ_f V_ft
P_t(z|s) = Σ_f V_ft P_t(s,z|f) / Σ_{z' ∈ {z_s}} Σ_f V_ft P_t(s,z'|f)
P_t(f,s) = P_t(s) Σ_{z ∈ {z_s}} P_t(z|s) P_s(f|z)
36

Latent Variable Decomposition Source Separation. Combining
P_t(f) = P_t(s_1) P(f|s_1) + P_t(s_2) P(f|s_2)
       = P_t(s_1) Σ_{z ∈ {z_s1}} P_t(z|s_1) P_s1(f|z) + P_t(s_2) Σ_{z ∈ {z_s2}} P_t(z|s_2) P_s2(f|z),
each source is reconstructed from the mixture spectrogram V_ft as
V̂_ft(s) = ( P_t(f,s) / P_t(f) ) · V_ft
37
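
A minimal sketch of the supervised separation step under stated assumptions: bases P_s(f|z) for each source have already been learned (e.g., with the PLCA sketch above) and are held fixed; per-frame weights are estimated by EM with P_t(s) P_t(z|s) folded into a single weight per component (equivalent for this purpose), and each source is recovered by reweighting the mixture spectrogram as in the equation above.

```python
import numpy as np

def separate(V_mix, bases, n_iter=100, eps=1e-12, seed=0):
    """V_mix: (F, T) mixture magnitude spectrogram.
    bases: list of (F, Z_s) learned P_s(f|z) matrices, one per known source.
    Returns a list of reconstructed magnitude spectrograms, one per source."""
    rng = np.random.default_rng(seed)
    W = np.concatenate(bases, axis=1)                # all components; columns sum to 1
    F, T = V_mix.shape
    Z = W.shape[1]
    H = rng.random((Z, T)); H /= H.sum(axis=0, keepdims=True)   # folds P_t(s) P_t(z|s)
    for _ in range(n_iter):
        joint = W[:, :, None] * H[None, :, :]                    # (F, Z, T)
        post = joint / (joint.sum(axis=1, keepdims=True) + eps)  # P_t(s, z | f)
        H = (V_mix[:, None, :] * post).sum(axis=0)               # bases stay fixed
        H /= H.sum(axis=0, keepdims=True) + eps
    P_tf = W @ H + eps                                           # P_t(f), up to scale
    outputs, start = [], 0
    for B in bases:                                              # Wiener-like reweighting
        stop = start + B.shape[1]
        outputs.append((B @ H[start:stop, :]) / P_tf * V_mix)
        start = stop
    return outputs
```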

Latent Variable Decomposition Source Separation. Mixture, reconstructions, and SNR improvements: 5.25 dB and 5.30 dB.
SNR improvement = g(R) - g(V), where g(X) = 10 log_10 ( Σ_{f,t} O_ft^2 / Σ_{f,t} |O_ft e^{jΦ_ft} - X_ft e^{jΦ_ft}|^2 )
with V the magnitude of the mixture, Φ the phase of the mixture, O the magnitude of the original signal, and R the magnitude of the reconstruction. 38
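
A small sketch of the quality metric as reconstructed above: g(·) compares a magnitude spectrogram against the original-source magnitude O under the mixture phase Φ, and the reported improvement is g(R) - g(V); function and variable names are hypothetical.

```python
import numpy as np

def g(X, O, Phi):
    """10*log10 of the signal-to-error energy ratio between magnitudes O and X,
    both paired with the mixture phase Phi (all arrays are F x T)."""
    err = np.abs(O * np.exp(1j * Phi) - X * np.exp(1j * Phi)) ** 2
    return 10 * np.log10(np.sum(O ** 2) / np.sum(err))

def snr_improvement(V, R, O, Phi):
    """V: mixture magnitude, R: reconstruction magnitude, O: original-source magnitude."""
    return g(R, O, Phi) - g(V, O, Phi)
```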

Latent Variable Decomposition Source Separation: Semi-supervised. Separation is possible even if only one source is known; the bases of the other source are estimated during separation. (Audio demos: song, foreground, background for each example.) Raise My Rent by David Gilmour: background music bases learned from 5 seconds of music-only clips in the song, lead guitar bases learned from the rest of the song. Sunrise by Norah Jones: harder, since the wave file is clipped; background music bases learned from 5 seconds of music-only segments of the song. More examples: http://cns.bu.edu/~mvss/courses/speechseg/ Denoising: only speech known. 39

Outline Introduction Time-Frequency Structure Latent Variable Decomposition: Probabilistic Framework Sparse Overcomplete Decomposition Learning more structure than the dimensionality will allow Conclusions 40

Sparse Overcomplete Decomposition Limitation of the Framework Sparsity. Real signals exhibit complex spectral structure, and the number of components required to model this structure can be large. However, the latent variable framework has a limitation: the number of components that can be extracted is limited by the number of frequency bins in the TF representation (an arbitrary choice in the context of ground truth). Extracting an overcomplete set of components leads to the problem of indeterminacy. 41

Sparse Overcomplete Decomposition Overcompleteness: Geometry. (Figure: simplexes with 3, 4, 7, and 10 basis vectors around the data; simplex boundary, data points.) Overcomplete case: as the number of bases increases, the basis components migrate towards the corners of the simplex; they represent the data accurately but lose data-specificity. 42

Sparse Overcomplete Decomposition Geometry of Sparse Coding. Indeterminacy in the overcomplete case: multiple solutions have zero error, hence indeterminacy. 43

Sparse Overcomplete Decomposition Sparse Coding Geometry of Sparse Coding. (Figure: candidate corner triples such as ABD, ACE, ACD, ABE, ABG, ACG, ACF.) Restriction: use the fewest number of corners; at least three are required for an accurate representation. The number of possible solutions is greatly reduced, but the problem is still indeterminate. Instead, we minimize the entropy of the mixture weights. 44

Sparsity Sparse Overcomplete Decomposition Sparsity. Sparsity originated as a theoretical basis for sensory coding (Kanerva, 1988; Field, 1994; Olshausen and Field, 1996), following Attneave (1954) and Barlow (1959, 1961) in using information-theoretic principles to understand perception; it also has utility in computational models and engineering methods. How to measure sparsity? Fewer active components means more sparsity. One measure is the number of non-zero mixture weights, i.e., the L0 norm; L0 is hard to optimize, so L1 (or L2 in certain cases) is used as an approximation. We use the entropy of the mixture weights as the measure. 45

Sparse Overcomplete Decomposition Sparsity Learning Sparse Codes: Entropic Prior. Model: P_t(f) = Σ_z P(f|z) P_t(z). Estimate P(f|z) such that the entropy of P_t(z) is minimized, by imposing an entropic prior on P_t(z) (Brand, 1999):
P_e(θ) ∝ e^{-β H(θ)}, where H(θ) = -Σ_i θ_i log θ_i is the entropy and β is the sparsity parameter that can be controlled.
Mixture weights P_t(z) with high entropy are penalized with low probability. The MAP formulation is solved using Lambert's W function. 46
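
To make the entropic prior concrete, a small sketch comparing the unnormalized prior value e^{-βH(θ)} for a sparse versus a dense mixture-weight vector; the actual MAP updates in the thesis use Lambert's W function, which is not reproduced here, and the weight vectors below are made up for illustration.

```python
import numpy as np

def entropy(theta, eps=1e-12):
    """H(theta) = -sum_i theta_i log theta_i for a probability vector."""
    theta = np.asarray(theta, dtype=float)
    return -np.sum(theta * np.log(theta + eps))

def entropic_prior(theta, beta):
    """Unnormalized prior e^{-beta * H(theta)} (Brand, 1999): low entropy -> high prior."""
    return np.exp(-beta * entropy(theta))

sparse_weights = [0.94, 0.02, 0.02, 0.02]
dense_weights  = [0.25, 0.25, 0.25, 0.25]
for beta in (0.1, 0.5, 1.0):
    # The sparse (low-entropy) weights receive the higher prior value for every beta > 0.
    print(beta, entropic_prior(sparse_weights, beta), entropic_prior(dense_weights, beta))
```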

Sparse Overcomplete Decomposition Geometry of Sparse Coding. (Figure: 7 basis vectors fit to the data with sparsity parameter 0.01, 0.05, 0.1, and 0.3.) Sparse overcomplete case: sparse mixture weights mean the spectral vectors must lie close to a small number of corners, forcing the convex hull to be compact; the simplex formed by the bases shrinks to fit the data. 47

Sparse Overcomplete Decomposition Sparse-coding: Illustration Examples. (Figures: toy data with basis components P(f|z) and mixture weights P_t(z); first 3 components learned with no sparsity, then 4 components with no sparsity, then 4 components with sparse mixture weights.) 48-50

Sparse Overcomplete Decomposition Speech Bases Examples. (Figure: speech bases trained without sparse mixture weights, a compact code, versus bases trained with sparse mixture weights, a sparse-overcomplete code.) 51

Sparse Overcomplete Decomposition Entropy Trade-off Sparse-coding Geometry. (Figure: average entropy of P(f|z) and of P_t(z) versus the sparsity parameter β, for 750 and 1000 basis components.) Sparse mixture weights give bases which are holistic representations of the data. A decrease in mixture-weight entropy leads to an increase in the entropy of the basis components; the components become more informative. The plots provide empirical evidence of this trade-off. 52

Sparse Overcomplete Decomposition Source Separation: Results.
Mixture 1: CC 3.82 dB / 3.80 dB; SC 8.90 dB / 9.16 dB
Mixture 2: CC 5.25 dB / 5.30 dB; SC 8.02 dB / 8.33 dB
CC: compact code, 100 components. SC: sparse-overcomplete code, 1000 components, β = 0.7. 53

Sparse Overcomplete Decomposition Source Separation: Results. (Figure: SNR and SER in dB versus the sparsity parameter β for Mixture 1, with 750 and with 1000 basis vectors.) The sparse-overcomplete code leads to better separation. Separation quality increases with increasing sparsity before tapering off at high sparsity values (> 0.7). 54

Sparse Overcomplete Decomposition Other Applications Other Applications Framework is general, operates on non-negative data Text data (word counts), images etc. Examples 55

Sparse Overcomplete Decomposition Other Applications. The framework is general and operates on non-negative data: text data (word counts), images, etc. Image examples: feature extraction. (Figure panels (a)-(e): learned features for different parameter settings, including α = β = 0; α = 0.05, β = 0; α = 150, β = 0; α = 0, β = 10; and α = 0, β = 0.3.) 56

Sparse Overcomplete Decomposition Other Applications. The framework is general and operates on non-negative data: text data (word counts), images, etc. Image examples: hand-written digit classification. (Figure: basis components and mixture weights for pixel images, trained with β = 0, 0.05, 0.2, and 0.5.) 57

Sparse Overcomplete Decomposition Other Applications. The framework is general and operates on non-negative data: text data (word counts), images, etc. Image examples: hand-written digit classification. (Figure: percentage classification error versus the sparsity parameter, for 25, 50, 75, 100, and 200 basis components.) 58

Outline Introduction Time-Frequency Structure Latent Variable Decomposition: Probabilistic Framework Sparse Overcomplete Decomposition Conclusions In conclusion 59

Thesis Contributions Conclusions Thesis Contributions. Modeling single-channel acoustic signals has important applications in various fields. The thesis provides a probabilistic framework amenable to principled extensions and improvements; incorporates the idea of sparse coding in the framework; points to other extensions in the form of priors; gives a theoretical analysis of the models and algorithms; and shows applicability to other data domains. Six refereed publications in international conferences and workshops (ICASSP, ICA, NIPS), two manuscripts under review (IEEE TPAMI, NIPS). 60

Future Work Conclusions Future Work. Representation: other TF representations (e.g., constant-Q, gammatone); multidimensional representations (correlograms, higher-order spectra); utilize phase information in the representation. Model and Theory. Applications. 61

Future Work Conclusions Future Work. Representation. Model and Theory: employ priors on parameters to impose known/hypothesized structure (Dirichlet, mixture Dirichlet, logistic normal); explicitly model time structure using HMMs or other dynamic models; utilize a discriminative learning paradigm; extract components that form independent subspaces, which could be used for unsupervised separation; relation between sparse decomposition and non-negative ICA; extensions/improvements to inference algorithms (e.g., tempered EM). Applications. 62

Future Work Conclusions Future Work. Representation. Model and Theory. Applications: other audio applications such as music transcription, speaker recognition, audio classification, language identification, etc.; explore applications in data mining, text semantic analysis, brain-imaging analysis, radiology, chemical spectral analysis, etc. 63

Acknowledgements. Prof. Barbara Shinn-Cunningham; Dr. Bhiksha Raj and Dr. Paris Smaragdis; thesis committee members; faculty and staff at CNS and the Hearing Research Center; scientists and staff at Mitsubishi Electric Research Laboratories; friends and well-wishers. Supported in part by the Air Force Office of Scientific Research (AFOSR FA9550-04-1-0260), the National Institutes of Health (NIH R01 DC05778), the National Science Foundation (NSF SBE-0354378), and the Office of Naval Research (ONR N00014-01-1-0624). Arts and Sciences Dean's Fellowship, Teaching Fellowship. Internships at Mitsubishi Electric Research Laboratories. 64

Additional Slides 65

Publications Conclusions Thesis Contributions. Refereed Publications and Manuscripts Under Review:
MVS Shashanka, B Raj, P Smaragdis. Probabilistic Latent Variable Model for Sparse Decompositions of Non-negative Data, submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence.
MVS Shashanka, B Raj, P Smaragdis. Sparse Overcomplete Latent Variable Decomposition of Counts Data, submitted to NIPS 2007.
P Smaragdis, B Raj, MVS Shashanka. Supervised and Semi-Supervised Separation of Sounds from Single-Channel Mixtures, Intl. Conf. on ICA, London, Sep 2007.
MVS Shashanka, B Raj, P Smaragdis. Sparse Overcomplete Decomposition for Single Channel Speaker Separation, Intl. Conf. on Acoustics, Speech and Signal Processing, Honolulu, Apr 2007.
B Raj, R Singh, MVS Shashanka, P Smaragdis. Bandwidth Expansion with a Polya Urn Model, Intl. Conf. on Acoustics, Speech and Signal Processing, Honolulu, Apr 2007.
B Raj, P Smaragdis, MVS Shashanka, R Singh. Separating a Foreground Singer from Background Music, Intl. Symposium on Frontiers of Research on Speech and Music, Mysore, India, Jan 2007.
P Smaragdis, B Raj, MVS Shashanka. A Probabilistic Latent Variable Model for Acoustic Modeling, Workshop on Advances in Models for Acoustic Processing, NIPS 2006.
B Raj, MVS Shashanka, P Smaragdis. Latent Dirichlet Decomposition for Single Channel Speaker Separation, Intl. Conf. on Acoustics, Speech and Signal Processing, May 2006.
66