STAT Advanced Bayesian Inference
1 / 32 STAT Advanced Bayesian Inference
Meng Li
Department of Statistics
Jan 23, 2018
2 / 32 The Dirichlet distribution

$\theta \sim \mathrm{Dirichlet}(a_1,\dots,a_k)$ with density
$$p(\theta_1,\theta_2,\dots,\theta_k) = \frac{\Gamma\!\left(\sum_{j=1}^k a_j\right)}{\prod_{j=1}^k \Gamma(a_j)} \prod_{j=1}^k \theta_j^{a_j - 1}.$$

Define $\alpha = \sum_{j=1}^k a_j$ and $\pi = (\pi_1,\dots,\pi_k) = a/\alpha$.

Expected value and variance of the $\mathrm{Dirichlet}(a_1,\dots,a_k)$ distribution:
$$\mathrm{E}(\theta_j) = \frac{a_j}{\alpha} = \pi_j, \qquad \mathrm{V}(\theta_j) = \frac{\mathrm{E}(\theta_j)\,[1 - \mathrm{E}(\theta_j)]}{1 + \alpha}.$$

Note that $\alpha$ is a precision parameter (the prior sample size): large $\alpha$ means low variance.
3 / 32 Conjugate analysis for multinomial data

Data: $y = (n_1,\dots,n_k)$, where $n_j$ = number of items in category $j$.

Prior: $\theta \sim \mathrm{Dirichlet}(a_1,\dots,a_k)$.

Likelihood: $p(n_1,n_2,\dots,n_k \mid \theta_1,\theta_2,\dots,\theta_k) \propto \prod_{j=1}^k \theta_j^{n_j}$.

Posterior: $\theta \mid n_1,\dots,n_k \sim \mathrm{Dirichlet}(n_1 + a_1,\dots,n_k + a_k)$.

Posterior expected value:
$$\mathrm{E}(\theta_j \mid n_1,\dots,n_k) = \frac{n_j + a_j}{n + \alpha}.$$
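As a concrete illustration (not part of the original slides), here is a minimal sketch of the conjugate update in NumPy; the prior parameters and observed counts are made-up values.

```python
import numpy as np

# Hypothetical prior counts and observed multinomial counts (made-up values).
a = np.array([2.0, 2.0, 2.0])   # Dirichlet prior parameters a_1, ..., a_k
n = np.array([10, 3, 7])        # observed counts n_1, ..., n_k

# Conjugacy: posterior is Dirichlet(n_1 + a_1, ..., n_k + a_k).
post = a + n

# Posterior mean E(theta_j | n) = (n_j + a_j) / (n + alpha).
alpha = a.sum()
post_mean = post / (n.sum() + alpha)
print(post_mean)                # -> approx [0.462, 0.192, 0.346]

# Draws from the posterior for uncertainty quantification.
rng = np.random.default_rng(0)
theta_draws = rng.dirichlet(post, size=1000)
```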
4 / 32 Bayesian histograms

A histogram partitions the data space, $\xi_0 < \xi_1 < \dots < \xi_k$, and records how many observations end up in each bin $B_h$: multinomial data.

Probability model for histograms:
$$f(y) = \sum_{h=1}^k 1_{\{\xi_{h-1} < y \le \xi_h\}} \frac{\pi_h}{\xi_h - \xi_{h-1}}, \qquad y \in \mathbb{R},$$
where $\pi = (\pi_1,\dots,\pi_k)$ is an unknown probability vector and $n_h$ = number of data points in partition (bin) $h$: $\xi_{h-1} < y \le \xi_h$.

Prior on $\pi = (\pi_1,\dots,\pi_k)$: $\pi \sim \mathrm{Dirichlet}(a_1,\dots,a_k)$.

Posterior: $\pi \mid n_1,\dots,n_k \sim \mathrm{Dirichlet}(n_1 + a_1,\dots,n_k + a_k)$.
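A small sketch (my addition, not from the slides) of the histogram density $f(y)$ defined above; the knots and weights in the example are illustrative.

```python
import numpy as np

def hist_density(y, knots, pi):
    """Evaluate f(y) = sum_h 1{xi_{h-1} < y <= xi_h} * pi_h / (xi_h - xi_{h-1})."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    widths = np.diff(knots)                          # xi_h - xi_{h-1}
    h = np.searchsorted(knots, y, side="left") - 1   # 0-based bin: knots[h] < y <= knots[h+1]
    out = np.zeros_like(y)
    ok = (h >= 0) & (h < len(widths))                # points outside the knots get density 0
    out[ok] = np.asarray(pi)[h[ok]] / widths[h[ok]]
    return out

# Example: 10 equal bins on [0, 1] with uniform bin probabilities.
knots = np.linspace(0.0, 1.0, 11)
pi = np.full(10, 0.1)
print(hist_density([0.05, 0.5], knots, pi))          # -> [1. 1.]
```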
5 / 32 Illustration of Bayesian histograms

[Figure: bins defined by knots $\xi_0 < \xi_1 < \dots < \xi_6$, with the data, the base measure $P_0$, and the prior bin probability $\alpha P_0(B_1)$ indicated.]
6 / 32 Bayesian histograms, cont.

Posterior: $\pi \mid n_1,\dots,n_k \sim \mathrm{Dirichlet}(n_1 + a_1,\dots,n_k + a_k)$.

Specify $a_1,\dots,a_k$ through $\pi_0 = (\pi_{01},\dots,\pi_{0k})$ and $\alpha = \sum_{j=1}^k a_j$, so that $a_h = \alpha\,\pi_{0h}$. Specify $\pi_0$ from a base distribution $P_0$: for the $h$th bin, $\pi_{0h} = P_0(B_h) = \Pr(\xi_{h-1} < y \le \xi_h)$.

Pros:
- The procedure is easy in that we have conjugacy.
- Allows prior information to be included in frequentist histogram estimates.

Cons:
- Results are sensitive to the knots. Allowing free knots is computationally demanding, and even averaging over random knots tends to produce bumps in the density estimate.
- Lacks smoothness: all pairs of bins have negative correlations, regardless of how near they are.
7 / 32 Bayesian histogram example

[Figure: the data, the prior, and the posterior histogram estimate for a given $\alpha$.]
8 / 32 Bayesian histogram example

[Figure: posterior histogram estimates for several values of $\alpha$.]
9 / 32 Histograms are sensitive to the choice of bins

[Figure: posterior histogram estimates under different bin placements, for several values of $\alpha$.]
10 / 32 Simulation experiment

Simulate data from the mixture
$$f(y) = 0.75\,\mathrm{Beta}(y; 1, 5) + 0.25\,\mathrm{Beta}(y; 2, 2),$$
with $n = 100$ samples. Assuming the data lie in $[0, 1]$ and choosing 10 equally spaced knots, apply the Bayes histogram approach (a sketch of this experiment follows below).
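A hedged end-to-end sketch of the experiment, reusing `hist_density` from the earlier sketch; the sample size and knot count follow the reconstruction above and are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate from the mixture 0.75*Beta(1, 5) + 0.25*Beta(2, 2).
n = 100
from_first = rng.random(n) < 0.75
y = np.where(from_first, rng.beta(1, 5, size=n), rng.beta(2, 2, size=n))

# 10 equally spaced bins on [0, 1]; a flat Dirichlet(1, ..., 1) prior on pi.
knots = np.linspace(0.0, 1.0, 11)
counts, _ = np.histogram(y, bins=knots)
a = np.ones(10)

# Posterior: Dirichlet(counts + a); plug the posterior mean of pi into f(y).
pi_hat = (counts + a) / (counts.sum() + a.sum())
grid = np.linspace(0.005, 0.995, 200)
f_hat = hist_density(grid, knots, pi_hat)   # posterior-mean histogram density
```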
11 / 32 Bayes histogram estimate

[Figure: the posterior histogram density estimate for the simulated beta-mixture data.]
12 / 32 The Dirichlet process

Let $B_1, B_2, \dots, B_k$ be a partition of the outcome space $\Omega$, and let $P(B_1),\dots,P(B_k)$ denote the distribution over the partition. The Dirichlet distribution is a distribution over a space of distributions:
$$(P(B_1),\dots,P(B_k)) \sim \mathrm{Dirichlet}(\alpha P_0(B_1),\dots,\alpha P_0(B_k)),$$
where $P_0$ is a fixed probability measure (e.g. the $N(0,1)$ distribution).

The Dirichlet distribution is closed under summation and splitting of bins, so it can be used to define a stochastic process in a consistent way. Compare with GPs.

A random probability measure $P$ follows a Dirichlet process, $P \sim \mathrm{DP}(\alpha P_0)$, with base measure $P_0$ and precision parameter $\alpha$ iff
$$(P(B_1),\dots,P(B_k)) \sim \mathrm{Dirichlet}(\alpha P_0(B_1),\dots,\alpha P_0(B_k))$$
for any finite (measurable) partition $B_1,\dots,B_k$.
13 / 32 The Dirichlet process - properties

If $P \sim \mathrm{DP}(\alpha P_0)$ then, for any measurable set $B$,
$$P(B) \sim \mathrm{Beta}\big(\alpha P_0(B),\; \alpha(1 - P_0(B))\big),$$
$$\mathrm{E}[P(B)] = P_0(B), \qquad \mathrm{Var}[P(B)] = \frac{P_0(B)\,[1 - P_0(B)]}{1 + \alpha}.$$

Model: $y_i \mid P \overset{iid}{\sim} P$, for $i = 1,\dots,n$. Prior: $P \sim \mathrm{DP}(\alpha P_0)$.

The posterior for a finite partition, $P(B_1),\dots,P(B_k) \mid y$, is
$$\mathrm{Dirichlet}\left(\alpha P_0(B_1) + \sum_{i=1}^n 1_{\{y_i \in B_1\}},\; \dots,\; \alpha P_0(B_k) + \sum_{i=1}^n 1_{\{y_i \in B_k\}}\right).$$
14 / 32 The Dirichlet process - properties

Posterior for the unknown probability distribution $P$:
$$P \mid y_1,\dots,y_n \sim \mathrm{DP}\left(\alpha P_0 + \sum_{i=1}^n \delta_{y_i}\right).$$

Since
$$P(B) \mid y \sim \mathrm{Beta}\left(\alpha P_0(B) + \sum_{i=1}^n 1_{\{y_i \in B\}},\; \alpha(1 - P_0(B)) + \sum_{i=1}^n 1_{\{y_i \in B^c\}}\right),$$
we have
$$\mathrm{E}\big(P(B) \mid y_1,\dots,y_n\big) = \left(\frac{\alpha}{\alpha + n}\right) P_0(B) + \left(\frac{n}{\alpha + n}\right) \frac{1}{n}\sum_{i=1}^n 1_{\{y_i \in B\}}.$$
15 / 32 Estimating a d.f. with a DP prior

If $B = (-\infty, y]$ then
$$\mathrm{E}\big(F(y) \mid y_1,\dots,y_n\big) = \left(\frac{\alpha}{\alpha + n}\right) F_0(y) + \left(\frac{n}{\alpha + n}\right) F_n(y),$$
where $F(y)$ is the unknown d.f., $F_0(y)$ is the d.f. of $P_0$, and $F_n(y) = \frac{1}{n}\sum_{i=1}^n 1_{\{y_i \le y\}}$ is the empirical d.f.

Note: under the DP posterior, $F(\cdot)$ is discrete with probability one. Not great for continuous data... This is true in general: realisations from a DP are discrete with probability one.
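A minimal sketch (my addition) of the posterior-mean d.f. formula, with a $N(0,1)$ base measure assumed:

```python
import numpy as np
from scipy.stats import norm

def posterior_mean_cdf(grid, data, alpha, base_cdf=norm.cdf):
    """E(F(y) | data) = alpha/(alpha+n) * F0(y) + n/(alpha+n) * Fn(y)."""
    data = np.sort(np.asarray(data))
    n = len(data)
    w = alpha / (alpha + n)
    F0 = base_cdf(grid)                                     # base d.f. F0
    Fn = np.searchsorted(data, grid, side="right") / n      # empirical d.f. Fn
    return w * F0 + (1 - w) * Fn
```

Large $\alpha$ pulls the estimate toward $F_0$; large $n$ pulls it toward the empirical d.f.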
16 / 32 Estimating a d.f. with a DP prior

[Figure: the empirical d.f. $F_n$, the prior d.f. $F_0$, and the posterior mean d.f. for several values of $\alpha$.]
17 / 32 Stick-breaking characterization of DP (Sethuraman)

$P \sim \mathrm{DP}(\alpha P_0)$ is equivalent to an infinite mixture of point masses:
$$P(\cdot) = \sum_{h=1}^{\infty} \pi_h \delta_{\theta_h}, \qquad \pi_h = V_h \prod_{l<h} (1 - V_l), \qquad V_h \overset{iid}{\sim} \mathrm{Beta}(1,\alpha), \qquad \theta_h \overset{iid}{\sim} P_0.$$

[Stick-breaking picture.]

Alternative notation for $P \sim \mathrm{DP}(\alpha P_0)$: $\pi = (\pi_1, \pi_2, \dots) \sim \mathrm{Stick}(\alpha)$ and $\theta_h \overset{iid}{\sim} P_0$.
18-21 / 32 Simulating stick-breaking priors

[Figures: realizations of $P$ from the stick-breaking construction for increasing values of $\alpha$.]
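A sketch (my addition) of how such draws can be generated by truncating the infinite sum; the truncation level and the $N(0,1)$ base measure are assumptions.

```python
import numpy as np

def stick_breaking_draw(alpha, H=500, rng=None):
    """One truncated draw P = sum_h pi_h * delta(theta_h) from DP(alpha * P0)."""
    rng = rng or np.random.default_rng()
    V = rng.beta(1.0, alpha, size=H)                 # V_h ~ Beta(1, alpha)
    V[-1] = 1.0                                      # truncation: force weights to sum to 1
    pi = V * np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))
    theta = rng.standard_normal(H)                   # theta_h ~ P0, here N(0, 1)
    return pi, theta

pi, theta = stick_breaking_draw(alpha=5.0)
# Small alpha: a few atoms carry most of the mass; large alpha: P resembles P0.
```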
22 / 32 Beyond DP - Pitman-Yor and probit sticks

Pitman-Yor process with parameters $P_0$, $0 \le a < 1$ and $b > -a$:
$$P(\cdot) = \sum_{h=1}^{\infty} \pi_h \delta_{\theta_h}, \qquad \theta_h \overset{iid}{\sim} P_0, \qquad \pi_h = V_h \prod_{l<h}(1 - V_l), \qquad V_h \sim \mathrm{Beta}(1 - a,\; b + ha).$$

Probit stick-breaking with parameters $\mu$ and $\sigma^2$:
$$P(\cdot) = \sum_{h=1}^{\infty} \pi_h \delta_{\theta_h}, \qquad \theta_h \overset{iid}{\sim} P_0, \qquad \pi_h = V_h \prod_{l<h}(1 - V_l), \qquad V_h = \Phi(x_h), \quad x_h \overset{iid}{\sim} N(\mu,\sigma^2).$$
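Relative to the DP sketch above, the only change is the distribution of the sticks $V_h$; a brief sketch with illustrative parameter values (my choices, not from the slides):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
H = 500
h = np.arange(1, H + 1)

# Pitman-Yor sticks: V_h ~ Beta(1 - a, b + h*a), with 0 <= a < 1, b > -a.
a, b = 0.3, 1.0                                      # assumed example values
V_py = rng.beta(1 - a, b + h * a)

# Probit sticks: V_h = Phi(x_h), x_h ~ N(mu, sigma^2).
mu, sigma = 0.0, 1.0
V_probit = norm.cdf(rng.normal(mu, sigma, size=H))
```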
23 / 32 Finite mixture models

Mixture of normals:
$$p(y) = \sum_{j=1}^k \pi_j\, \phi(y; \mu_j, \sigma_j^2).$$

Use allocation variables: $I_i = j$ if $y_i$ comes from $\phi(y; \mu_j, \sigma_j^2)$. Let $I = (I_1,\dots,I_n)$ and $n_j = \sum_{i=1}^n 1_{\{I_i = j\}}$.

Gibbs sampling algorithm (a code sketch follows below):
1. $\pi_1,\dots,\pi_k \mid I, y \sim \mathrm{Dirichlet}(a_1 + n_1, a_2 + n_2, \dots, a_k + n_k)$.
2. $\sigma_j^2 \mid I, y \sim \mathrm{Inv}\text{-}\chi^2$ and $\mu_j \mid I, \sigma_j^2, y \sim N$, for $j = 1,\dots,k$.
3. $I_i \mid \pi, \mu, \sigma^2, y \sim \mathrm{Multinomial}(\omega_{i,1},\dots,\omega_{i,k})$, $i = 1,\dots,n$, where
$$\omega_{i,j} = \frac{\pi_j\, \phi(y_i; \mu_j, \sigma_j^2)}{\sum_{q=1}^k \pi_q\, \phi(y_i; \mu_q, \sigma_q^2)}.$$
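A compact sketch (my addition) of one Gibbs sweep, simplified to a known, common variance; the slides' full sampler also draws $\sigma_j^2$ from an Inv-$\chi^2$ step.

```python
import numpy as np

def gibbs_sweep(y, mu, pi, sigma2, a, rng):
    """One sweep for a k-component normal mixture with fixed variance sigma2."""
    k = len(mu)
    mu = mu.copy()
    # 1. I_i | pi, mu with weights proportional to pi_j * phi(y_i; mu_j, sigma2).
    logw = np.log(pi) - 0.5 * (y[:, None] - mu[None, :]) ** 2 / sigma2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    I = np.array([rng.choice(k, p=wi) for wi in w])
    # 2. pi | I ~ Dirichlet(a_1 + n_1, ..., a_k + n_k).
    counts = np.bincount(I, minlength=k)
    pi = rng.dirichlet(a + counts)
    # 3. mu_j | I, y ~ N (flat prior on mu_j for brevity; a normal prior adds shrinkage).
    for j in range(k):
        if counts[j] > 0:
            mu[j] = rng.normal(y[I == j].mean(), np.sqrt(sigma2 / counts[j]))
        else:
            mu[j] = rng.normal(0.0, 10.0)   # draw from a vague prior if the cluster is empty
    return I, pi, mu
```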
24 / 32 Infinite mixture models - DP mixtures

General mixture formulation:
$$f(y \mid P) = \int K(y \mid \theta)\, dP(\theta),$$
where $K(y \mid \theta)$ is a kernel and $P(\theta)$ is a mixing measure.

Example 1: Student-$t$, $t_\nu(\mu,\sigma^2)$. $K(y \mid \theta) = \phi(y \mid \mu, \lambda)$ where $\mu$ is fixed, $\theta = \lambda$, and $P(\theta)$ is the $\mathrm{Inv}\text{-}\chi^2(\nu,\sigma^2)$ distribution.

Example 2: Finite mixture of normals. $K(y \mid \theta) = \phi(y \mid \mu,\sigma^2)$, $\theta = (\mu,\sigma^2)$, and $P(\theta)$ is a discrete distribution with $\Pr\left[\theta = (\mu_j,\sigma_j^2)\right] = \pi_j$, for $j = 1,\dots,k$.

Example 3: $P \sim \mathrm{DP}(\alpha P_0)$ yields the infinite mixture $f(y) = \sum_{h=1}^{\infty} \pi_h K(y \mid \theta_h)$, $\pi \sim \mathrm{Stick}(\alpha)$.
25 / 32 DP mixture is like a finite mixture with large k

In infinite mixtures every observation has its own parameter:
$$y_i \sim K(y \mid \theta_i), \qquad \theta_i \sim P.$$
The DP is almost surely discrete, which produces ties: some of the $\theta_i$ will have exactly the same values, so the DP leads to clustering of the $\theta_i$. Each observation has potentially its own parameter $\theta_i$, but that parameter may be shared by other observations.

In finite mixture models each observation also has its own parameter:
$$y_i \mid I_i \sim K(y \mid \theta_{I_i}), \qquad I_i \mid \pi \sim \mathrm{Multinomial}(\pi_1,\dots,\pi_k), \qquad \theta_j \sim P_0, \qquad \pi \sim \mathrm{Dirichlet}(\alpha/k,\dots,\alpha/k).$$
Neal (2000) shows that this finite mixture model approaches the DP mixture when $k \to \infty$.
26 / 32 Marginalizing out P from a DP - Polya scheme

Hierarchical representation of DP mixtures:
$$y_i \sim K(y \mid \theta_i), \qquad \theta_i \sim P, \qquad P \sim \mathrm{DP}(\alpha P_0).$$

We can actually marginalize out $P$ to obtain the Polya scheme:
$$p(\theta_i \mid \theta_1,\dots,\theta_{i-1}) = \left(\frac{\alpha}{\alpha + i - 1}\right) P_0(\theta_i) + \left(\frac{1}{\alpha + i - 1}\right) \sum_{j=1}^{i-1} \delta_{\theta_j}.$$
So $p(\theta_i \mid \theta_1,\dots,\theta_{i-1})$ is a mixture of the base measure $P_0$ and point masses at the previously drawn $\theta$-values.

A way to think about the scary "marginalizing out $P$": integrate out $\pi$ in the finite mixture model and let $k \to \infty$ [Neal, 2000].
27 / 32 DPs and the Chinese restaurant process

The so-called Polya scheme:
$$p(\theta_i \mid \theta_1,\dots,\theta_{i-1}) = \left(\frac{\alpha}{\alpha + i - 1}\right) P_0(\theta_i) + \left(\frac{1}{\alpha + i - 1}\right) \sum_{j=1}^{i-1} \delta_{\theta_j}.$$

Chinese restaurant process (a simulation sketch follows below):
- The first customer sits at an empty table and obtains the dish $\theta_1$ from $P_0$.
- The second customer sits at the first customer's table with probability $\frac{1}{1+\alpha}$ and has dish $\theta_1$, or sits at a new table with probability $\frac{\alpha}{1+\alpha}$ and has dish $\theta_2 \sim P_0$.
- The $i$th customer sits at the table with dish $\theta_j^\ast$ with probability proportional to $n_j$, the number of customers sitting at table $j$, or sits at a new table with probability proportional to $\alpha$.
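A sketch (my addition) simulating table assignments from the Chinese restaurant process:

```python
import numpy as np

def crp(n, alpha, rng=None):
    """Simulate seating for n customers; returns the table index of each customer."""
    rng = rng or np.random.default_rng()
    tables = []                                  # tables[j] = n_j, customers at table j
    seats = []
    for i in range(n):
        # Existing table j with prob. prop. to n_j; new table with prob. prop. to alpha.
        p = np.array(tables + [alpha], dtype=float)
        j = rng.choice(len(p), p=p / p.sum())
        if j == len(tables):
            tables.append(1)                     # open a new table
        else:
            tables[j] += 1
        seats.append(j)
    return np.array(seats)

seats = crp(100, alpha=1.0)
# The number of occupied tables grows roughly like alpha * log(n).
```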
28 / 32 Gibbs sampling DP mixtures - marginalizing P

Similar to Gibbs sampling for finite mixtures; data augmentation with mixture component indicators $I_i$.

1. Update the component allocation for the $i$th observation $y_i$ by sampling from the multinomial
$$\Pr(I_i = j \mid -) \propto \begin{cases} n_j^{(-i)}\, K(y_i \mid \theta_j^\ast) & \text{for } j = 1,\dots,k^{(-i)} \\ \alpha \int K(y_i \mid \theta)\, dP_0(\theta) & \text{for } j = k^{(-i)} + 1. \end{cases}$$
2. Update the unique parameter values $\theta_j^\ast$ by sampling from
$$p(\theta_j^\ast \mid -) \propto P_0(\theta_j^\ast) \prod_{i:\, I_i = j} K(y_i \mid \theta_j^\ast).$$

Note that, unlike finite mixtures, the $I_i$ are not independent conditional on $\theta$. This is because we have marginalized out $P$, so they have to be sampled sequentially.
29 / 32 Gibbs sampling for truncated DP mixtures

Set an upper bound $N$ for the number of components; approximate the DP mixture with $\pi_h = 0$ for $h = N+1,\dots$. Posterior sampling for infinite mixtures is now very similar to the finite mixture case, and the $I_i$ can be sampled independently. (A code sketch follows below.)

1. Update the component allocation for the $i$th observation $y_i$ by sampling from the multinomial
$$\Pr(I_i = j \mid -) \propto \pi_j\, K(y_i \mid \theta_j^\ast), \qquad \text{for } j = 1, 2, \dots, N.$$
2. Update the stick-breaking weights [recall: $\pi_h = V_h \prod_{l<h}(1 - V_l)$]:
$$V_j \sim \mathrm{Beta}\left(1 + n_j,\; \alpha + \sum_{q=j+1}^{N} n_q\right), \qquad \text{for } j = 1,\dots,N-1.$$
3. Update the unique parameter values $\theta_1^\ast,\dots,\theta_N^\ast$ by sampling just like in the finite mixture model. Sample $\theta^\ast$ from the prior $P_0(\theta)$ for empty clusters.
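A sketch (my addition) of one blocked-Gibbs sweep for a truncated DP mixture of normals with known variance; the truncation level and the $N(0,1)$ base measure are assumptions.

```python
import numpy as np

def blocked_gibbs_sweep(y, theta, V, alpha, sigma2, rng):
    """One sweep of the truncated sampler; theta and V have length N, with V[N-1] = 1."""
    N = len(theta)
    theta = np.array(theta, dtype=float)
    pi = V * np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))
    # 1. I_i independently, Pr(I_i = j) prop. to pi_j * phi(y_i; theta_j, sigma2).
    logw = np.log(pi) - 0.5 * (y[:, None] - theta[None, :]) ** 2 / sigma2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    I = np.array([rng.choice(N, p=wi) for wi in w])
    n_j = np.bincount(I, minlength=N)
    # 2. V_j ~ Beta(1 + n_j, alpha + sum_{q>j} n_q); keep V_N = 1 for the truncation.
    tail = np.cumsum(n_j[::-1])[::-1] - n_j          # sum_{q > j} n_q
    V = np.append(rng.beta(1 + n_j[:-1], alpha + tail[:-1]), 1.0)
    # 3. theta_j conjugate normal update under a N(0, 1) base; prior draw if empty.
    for j in range(N):
        prec = 1.0 + n_j[j] / sigma2                 # posterior precision
        mean = (y[I == j].sum() / sigma2) / prec     # zero sum (prior draw) if empty
        theta[j] = rng.normal(mean, np.sqrt(1.0 / prec))
    return I, V, theta
```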
30 / 32 MCMC for DP mixtures

Let's look at the updating step:
$$\Pr(I_i = j \mid -) \propto \begin{cases} n_j^{(-i)}\, K(y_i \mid \theta_j^\ast) & \text{for } j = 1,\dots,k^{(-i)} \\ \alpha \int K(y_i \mid \theta)\, dP_0(\theta) & \text{for } j = k^{(-i)} + 1. \end{cases}$$

A customer chooses a table based on:
- the number of existing customers at the tables (with $\alpha$ imaginary customers at a new table);
- how compatible the taste of the customer ($y_i$) is with the different dishes served at occupied tables ($\theta^\ast$);
- how compatible the taste of the customer ($y_i$) is with the different dishes that may be served at a new table.

A $P_0(\theta)$ with large variance is equivalent to a very experimental cook: you never know what you will get...

The hyperparameter $\alpha$ clearly matters for the number of clusters (tables), but so does $P_0$. The hyperparameter $\alpha$ can be learned from the data, and $P_0$ may contain hyperparameters (e.g. $P_0 = N(\mu,\sigma^2)$); just add updating steps for those.
31 / 32 Mixture of multivariate regressions - Model

The response vector $y$ is $p$-dimensional; the covariate vector $x$ is $q$-dimensional. The model is of the form
$$p(y \mid x) = \sum_{j=1}^{\infty} \pi_j\, N(y_i \mid B_j x_i, \Sigma_j).$$

Each component in the mixture is a Gaussian multivariate regression with its own regression coefficient matrix and covariance matrix:
$$\underset{p \times 1}{y_i} = \underset{p \times q}{B_j}\, \underset{q \times 1}{x_i} + \varepsilon_i, \qquad \varepsilon_i \overset{iid}{\sim} N(0, \Sigma_j).$$

The mixture weights follow a DP stick-breaking prior, $\pi \sim \mathrm{Stick}(\alpha)$.
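A sketch (my addition) evaluating the truncated mixture density $p(y \mid x)$ for given weights and component parameters; all parameter values here would be placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_reg_density(y, x, pi, B_list, Sigma_list):
    """p(y | x) = sum_j pi_j * N(y; B_j @ x, Sigma_j), truncated at len(pi) terms."""
    return sum(p * multivariate_normal.pdf(y, mean=B @ x, cov=S)
               for p, B, S in zip(pi, B_list, Sigma_list))

# Example with made-up 2-dim response, 2 covariates, and two components:
# pi = [0.6, 0.4]; B_list = [np.eye(2), -np.eye(2)]; Sigma_list = [np.eye(2)] * 2
```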
32 / 32 Mixture of multivariate regressions - Data

[Figure: scatterplots of the responses $y_1$ and $y_2$ against the covariates $x_1$ and $x_2$.]
33 / 32 Mixture of multivariate regressions - DPM

[Figure: the DP mixture fit to the data, with the fitted component regressions overlaid and the number of components indicated.]
34 / 32 Mixture of multivariate regressions - DPM

[Figure: posterior distribution of the number of components.]
More information